Test Report: Docker_Linux_crio_arm64 21800

bb40a8e434b348a4cf46a27f5566e4aff121b396:2025-10-29:42116

Test failures (41/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.61
35 TestAddons/parallel/Registry 14.86
36 TestAddons/parallel/RegistryCreds 0.48
37 TestAddons/parallel/Ingress 145.46
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.45
41 TestAddons/parallel/CSI 40.29
42 TestAddons/parallel/Headlamp 3.24
43 TestAddons/parallel/CloudSpanner 5.3
44 TestAddons/parallel/LocalPath 8.52
45 TestAddons/parallel/NvidiaDevicePlugin 5.28
46 TestAddons/parallel/Yakd 6.25
97 TestFunctional/parallel/ServiceCmdConnect 603.56
116 TestFunctional/parallel/ImageCommands/ImageListShort 2.25
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.92
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
153 TestFunctional/parallel/ServiceCmd/Format 0.5
154 TestFunctional/parallel/ServiceCmd/URL 0.64
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 529.71
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.1
191 TestJSONOutput/pause/Command 2.5
197 TestJSONOutput/unpause/Command 2.11
248 TestPreload 447.61
281 TestPause/serial/Pause 6.54
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.49
303 TestStartStop/group/old-k8s-version/serial/Pause 7.12
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.65
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.15
321 TestStartStop/group/no-preload/serial/Pause 6.45
327 TestStartStop/group/embed-certs/serial/Pause 7.45
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.41
338 TestStartStop/group/newest-cni/serial/Pause 6.35
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.42
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.5
TestAddons/serial/Volcano (0.61s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable volcano --alsologtostderr -v=1: exit status 11 (610.182781ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:23:12.345802   11323 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:12.347265   11323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:12.347310   11323 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:12.347332   11323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:12.347632   11323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:23:12.347955   11323 mustload.go:66] Loading cluster: addons-757691
	I1029 08:23:12.348508   11323 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:12.348555   11323 addons.go:607] checking whether the cluster is paused
	I1029 08:23:12.348741   11323 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:12.348780   11323 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:23:12.349404   11323 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:23:12.383952   11323 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:12.384003   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:23:12.418136   11323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:23:12.527432   11323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:12.527519   11323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:12.562327   11323 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:23:12.562351   11323 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:23:12.562356   11323 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:23:12.562360   11323 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:23:12.562363   11323 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:23:12.562367   11323 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:23:12.562371   11323 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:23:12.562374   11323 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:23:12.562378   11323 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:23:12.562385   11323 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:23:12.562388   11323 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:23:12.562392   11323 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:23:12.562396   11323 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:23:12.562399   11323 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:23:12.562402   11323 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:23:12.562410   11323 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:23:12.562414   11323 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:23:12.562418   11323 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:23:12.562421   11323 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:23:12.562424   11323 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:23:12.562429   11323 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:23:12.562432   11323 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:23:12.562435   11323 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:23:12.562439   11323 cri.go:89] found id: ""
	I1029 08:23:12.562487   11323 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:12.580973   11323 out.go:203] 
	W1029 08:23:12.583927   11323 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:12.583957   11323 out.go:285] * 
	* 
	W1029 08:23:12.861431   11323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:12.864371   11323 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.61s)

TestAddons/parallel/Registry (14.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.371431ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003100932s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004381487s
addons_test.go:392: (dbg) Run:  kubectl --context addons-757691 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-757691 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-757691 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.303644394s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable registry --alsologtostderr -v=1: exit status 11 (256.383106ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:23:38.882442   12269 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:38.882592   12269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:38.882605   12269 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:38.882611   12269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:38.883978   12269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:23:38.884364   12269 mustload.go:66] Loading cluster: addons-757691
	I1029 08:23:38.884753   12269 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:38.884771   12269 addons.go:607] checking whether the cluster is paused
	I1029 08:23:38.884879   12269 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:38.884894   12269 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:23:38.885371   12269 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:23:38.902455   12269 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:38.902516   12269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:23:38.923014   12269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:23:39.031150   12269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:39.031232   12269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:39.061851   12269 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:23:39.061879   12269 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:23:39.061885   12269 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:23:39.061889   12269 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:23:39.061892   12269 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:23:39.061896   12269 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:23:39.061899   12269 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:23:39.061902   12269 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:23:39.061906   12269 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:23:39.061913   12269 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:23:39.061917   12269 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:23:39.061920   12269 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:23:39.061923   12269 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:23:39.061927   12269 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:23:39.061930   12269 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:23:39.061935   12269 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:23:39.061944   12269 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:23:39.061949   12269 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:23:39.061952   12269 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:23:39.061956   12269 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:23:39.061968   12269 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:23:39.061976   12269 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:23:39.061980   12269 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:23:39.061983   12269 cri.go:89] found id: ""
	I1029 08:23:39.062039   12269 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:39.077337   12269 out.go:203] 
	W1029 08:23:39.080293   12269 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:39.080330   12269 out.go:285] * 
	* 
	W1029 08:23:39.084612   12269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:39.087467   12269 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.86s)

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.423608ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-757691
addons_test.go:332: (dbg) Run:  kubectl --context addons-757691 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.963178ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:24:08.042049   13341 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:08.042306   13341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:08.042319   13341 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:08.042325   13341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:08.042662   13341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:08.042999   13341 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:08.043357   13341 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:08.043376   13341 addons.go:607] checking whether the cluster is paused
	I1029 08:24:08.043478   13341 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:08.043493   13341 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:08.043954   13341 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:08.062559   13341 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:08.062621   13341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:08.081272   13341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:08.187835   13341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:08.187926   13341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:08.216740   13341 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:08.216761   13341 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:08.216766   13341 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:08.216770   13341 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:08.216773   13341 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:08.216777   13341 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:08.216780   13341 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:08.216783   13341 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:08.216786   13341 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:08.216793   13341 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:08.216797   13341 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:08.216800   13341 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:08.216803   13341 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:08.216806   13341 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:08.216809   13341 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:08.216819   13341 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:08.216827   13341 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:08.216860   13341 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:08.216869   13341 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:08.216873   13341 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:08.216878   13341 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:08.216881   13341 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:08.216884   13341 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:08.216888   13341 cri.go:89] found id: ""
	I1029 08:24:08.216935   13341 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:08.231658   13341 out.go:203] 
	W1029 08:24:08.234537   13341 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:08.234558   13341 out.go:285] * 
	* 
	W1029 08:24:08.238903   13341 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:08.241830   13341 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (145.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-757691 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-757691 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-757691 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c4cce251-d90e-40ca-bce0-09de2bd3721d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c4cce251-d90e-40ca-bce0-09de2bd3721d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.015369794s
I1029 08:24:00.449019    4550 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.433276597s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-757691 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-757691
helpers_test.go:243: (dbg) docker inspect addons-757691:

-- stdout --
	[
	    {
	        "Id": "bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33",
	        "Created": "2025-10-29T08:21:00.554043188Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:21:00.623778281Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/hosts",
	        "LogPath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33-json.log",
	        "Name": "/addons-757691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-757691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-757691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33",
	                "LowerDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-757691",
	                "Source": "/var/lib/docker/volumes/addons-757691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-757691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-757691",
	                "name.minikube.sigs.k8s.io": "addons-757691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e700d532a9402dbf516f0e568893bb7dc91a62b88f9bd6512ec824d3c9df021",
	            "SandboxKey": "/var/run/docker/netns/5e700d532a94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-757691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:f8:84:6c:98:8a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdc142442313fd40792fd7b16d636299c5bcbfc81c2066be50b2e2d2b3915e19",
	                    "EndpointID": "aa60f742795b48b6136657f03a241a7aa9362d6eb5e10ab2a35ccc1e76d01a8c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-757691",
	                        "bf6f603e4d4f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-757691 -n addons-757691
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-757691 logs -n 25: (1.521049252s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-024522                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-024522 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ --download-only -p binary-mirror-301132 --alsologtostderr --binary-mirror http://127.0.0.1:43123 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-301132   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-301132                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-301132   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ addons  │ disable dashboard -p addons-757691                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-757691                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ start   │ -p addons-757691 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:23 UTC │
	│ addons  │ addons-757691 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-757691 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ enable headlamp -p addons-757691 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-757691 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ ip      │ addons-757691 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │ 29 Oct 25 08:23 UTC │
	│ addons  │ addons-757691 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-757691 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-757691 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ ssh     │ addons-757691 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ addons-757691 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ addons-757691 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-757691                                                                                                                                                                                                                                                                                                                                                                                           │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-757691 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ addons-757691 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ addons-757691 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ ssh     │ addons-757691 ssh cat /opt/local-path-provisioner/pvc-e1dc20ec-fec2-44cc-ac2b-af307dd1a9cc_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-757691 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ addons-757691 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ ip      │ addons-757691 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:26 UTC │ 29 Oct 25 08:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:34.389589    5303 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:34.389717    5303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:34.389727    5303 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:34.389733    5303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:34.390441    5303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:20:34.390957    5303 out.go:368] Setting JSON to false
	I1029 08:20:34.391691    5303 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":186,"bootTime":1761725848,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:20:34.391758    5303 start.go:143] virtualization:  
	I1029 08:20:34.395063    5303 out.go:179] * [addons-757691] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:20:34.398798    5303 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:20:34.398882    5303 notify.go:221] Checking for updates...
	I1029 08:20:34.404717    5303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:34.407567    5303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:20:34.410356    5303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:20:34.413197    5303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:20:34.416112    5303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:20:34.419135    5303 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:34.450005    5303 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:20:34.450133    5303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:34.506691    5303 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-29 08:20:34.497025876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:34.506817    5303 docker.go:319] overlay module found
	I1029 08:20:34.510088    5303 out.go:179] * Using the docker driver based on user configuration
	I1029 08:20:34.513069    5303 start.go:309] selected driver: docker
	I1029 08:20:34.513092    5303 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:34.513106    5303 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:20:34.513798    5303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:34.577263    5303 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-29 08:20:34.567724839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:34.577422    5303 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:34.577655    5303 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:20:34.580638    5303 out.go:179] * Using Docker driver with root privileges
	I1029 08:20:34.583505    5303 cni.go:84] Creating CNI manager for ""
	I1029 08:20:34.583565    5303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:34.583577    5303 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:34.583670    5303 start.go:353] cluster config:
	{Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1029 08:20:34.586722    5303 out.go:179] * Starting "addons-757691" primary control-plane node in "addons-757691" cluster
	I1029 08:20:34.589517    5303 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:20:34.592504    5303 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:20:34.595356    5303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:34.595405    5303 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:20:34.595417    5303 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:34.595506    5303 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:20:34.595521    5303 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:20:34.595846    5303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/config.json ...
	I1029 08:20:34.595872    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/config.json: {Name:mk483fc51061c028c7d42c844695485f626c1c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:34.596038    5303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:20:34.610967    5303 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:34.611079    5303 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1029 08:20:34.611102    5303 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1029 08:20:34.611110    5303 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1029 08:20:34.611118    5303 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1029 08:20:34.611124    5303 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1029 08:20:52.364681    5303 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1029 08:20:52.364724    5303 cache.go:233] Successfully downloaded all kic artifacts
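	Note: at this point the kicbase image has been loaded into the host's docker daemon from the cached tarball. A minimal spot-check from the host (a sketch, assuming the image is still present):

	    docker images --digests gcr.io/k8s-minikube/kicbase-builds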
	I1029 08:20:52.364755    5303 start.go:360] acquireMachinesLock for addons-757691: {Name:mk8f6dfa288988e6cf9ac15aaaee63ecff02dc5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:20:52.364876    5303 start.go:364] duration metric: took 99.293µs to acquireMachinesLock for "addons-757691"
	I1029 08:20:52.364910    5303 start.go:93] Provisioning new machine with config: &{Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:20:52.364999    5303 start.go:125] createHost starting for "" (driver="docker")
	I1029 08:20:52.368434    5303 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1029 08:20:52.368681    5303 start.go:159] libmachine.API.Create for "addons-757691" (driver="docker")
	I1029 08:20:52.368726    5303 client.go:173] LocalClient.Create starting
	I1029 08:20:52.368850    5303 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 08:20:53.095366    5303 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 08:20:53.527989    5303 cli_runner.go:164] Run: docker network inspect addons-757691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 08:20:53.544511    5303 cli_runner.go:211] docker network inspect addons-757691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 08:20:53.544591    5303 network_create.go:284] running [docker network inspect addons-757691] to gather additional debugging logs...
	I1029 08:20:53.544610    5303 cli_runner.go:164] Run: docker network inspect addons-757691
	W1029 08:20:53.560372    5303 cli_runner.go:211] docker network inspect addons-757691 returned with exit code 1
	I1029 08:20:53.560403    5303 network_create.go:287] error running [docker network inspect addons-757691]: docker network inspect addons-757691: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-757691 not found
	I1029 08:20:53.560429    5303 network_create.go:289] output of [docker network inspect addons-757691]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-757691 not found
	
	** /stderr **
	I1029 08:20:53.560538    5303 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:20:53.577238    5303 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018ebe70}
	I1029 08:20:53.577290    5303 network_create.go:124] attempt to create docker network addons-757691 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1029 08:20:53.577345    5303 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-757691 addons-757691
	I1029 08:20:53.633409    5303 network_create.go:108] docker network addons-757691 192.168.49.0/24 created
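	A quick way to confirm the subnet and gateway minikube picked for this network (a sketch, assuming the addons-757691 network still exists on the host):

	    docker network inspect addons-757691 \
	      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'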
	I1029 08:20:53.633441    5303 kic.go:121] calculated static IP "192.168.49.2" for the "addons-757691" container
	I1029 08:20:53.633534    5303 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 08:20:53.650546    5303 cli_runner.go:164] Run: docker volume create addons-757691 --label name.minikube.sigs.k8s.io=addons-757691 --label created_by.minikube.sigs.k8s.io=true
	I1029 08:20:53.670256    5303 oci.go:103] Successfully created a docker volume addons-757691
	I1029 08:20:53.670346    5303 cli_runner.go:164] Run: docker run --rm --name addons-757691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757691 --entrypoint /usr/bin/test -v addons-757691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 08:20:55.992225    5303 cli_runner.go:217] Completed: docker run --rm --name addons-757691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757691 --entrypoint /usr/bin/test -v addons-757691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.321826911s)
	I1029 08:20:55.992254    5303 oci.go:107] Successfully prepared a docker volume addons-757691
	I1029 08:20:55.992278    5303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:55.992295    5303 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 08:20:55.992389    5303 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-757691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 08:21:00.463875    5303 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-757691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471449436s)
	I1029 08:21:00.463917    5303 kic.go:203] duration metric: took 4.471616118s to extract preloaded images to volume ...
	W1029 08:21:00.464137    5303 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 08:21:00.464264    5303 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 08:21:00.537162    5303 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-757691 --name addons-757691 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757691 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-757691 --network addons-757691 --ip 192.168.49.2 --volume addons-757691:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 08:21:00.887662    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Running}}
	I1029 08:21:00.910604    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:00.936037    5303 cli_runner.go:164] Run: docker exec addons-757691 stat /var/lib/dpkg/alternatives/iptables
	I1029 08:21:00.994545    5303 oci.go:144] the created container "addons-757691" has a running status.
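	The docker run above publishes 8443, 22, 2376, 5000 and 32443 to random loopback ports; the actual host-side mappings can be listed with (sketch, while the container is up):

	    docker port addons-757691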
	I1029 08:21:00.994574    5303 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa...
	I1029 08:21:02.082350    5303 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 08:21:02.108756    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:02.126538    5303 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 08:21:02.126560    5303 kic_runner.go:114] Args: [docker exec --privileged addons-757691 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 08:21:02.167522    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:02.188062    5303 machine.go:94] provisionDockerMachine start ...
	I1029 08:21:02.188172    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.205849    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:02.206187    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:02.206203    5303 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:21:02.356048    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-757691
	
	I1029 08:21:02.356086    5303 ubuntu.go:182] provisioning hostname "addons-757691"
	I1029 08:21:02.356161    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.375115    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:02.375427    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:02.375439    5303 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-757691 && echo "addons-757691" | sudo tee /etc/hostname
	I1029 08:21:02.533957    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-757691
	
	I1029 08:21:02.534037    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.552090    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:02.552447    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:02.552473    5303 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-757691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-757691/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-757691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:21:02.700735    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
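	The hostname block above only rewrites 127.0.1.1 when a stale mapping already exists; a minimal check of the result inside the node, using the same docker exec pattern seen earlier in this log (sketch):

	    docker exec addons-757691 grep -n 'addons-757691' /etc/hosts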
	I1029 08:21:02.700830    5303 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:21:02.700887    5303 ubuntu.go:190] setting up certificates
	I1029 08:21:02.700921    5303 provision.go:84] configureAuth start
	I1029 08:21:02.701018    5303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757691
	I1029 08:21:02.718349    5303 provision.go:143] copyHostCerts
	I1029 08:21:02.718431    5303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:21:02.718549    5303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:21:02.718613    5303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:21:02.718659    5303 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.addons-757691 san=[127.0.0.1 192.168.49.2 addons-757691 localhost minikube]
	I1029 08:21:02.952766    5303 provision.go:177] copyRemoteCerts
	I1029 08:21:02.952847    5303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:21:02.952888    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.970015    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.075996    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:21:03.093709    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:21:03.111830    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:21:03.129239    5303 provision.go:87] duration metric: took 428.29073ms to configureAuth
	I1029 08:21:03.129263    5303 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:21:03.129449    5303 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:03.129555    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.147491    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:03.147801    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:03.147815    5303 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:21:03.403990    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:21:03.404007    5303 machine.go:97] duration metric: took 1.215919808s to provisionDockerMachine
	I1029 08:21:03.404017    5303 client.go:176] duration metric: took 11.035279731s to LocalClient.Create
	I1029 08:21:03.404030    5303 start.go:167] duration metric: took 11.035352118s to libmachine.API.Create "addons-757691"
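	The provisioning step above wrote /etc/sysconfig/crio.minikube and restarted CRI-O inside the node; a spot-check of both (a sketch, assuming the kicbase container's systemd is reachable via docker exec):

	    docker exec addons-757691 cat /etc/sysconfig/crio.minikube
	    docker exec addons-757691 systemctl is-active crio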
	I1029 08:21:03.404038    5303 start.go:293] postStartSetup for "addons-757691" (driver="docker")
	I1029 08:21:03.404048    5303 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:21:03.404125    5303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:21:03.404167    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.427208    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.532438    5303 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:21:03.535683    5303 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:21:03.535713    5303 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:21:03.535725    5303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:21:03.535794    5303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:21:03.535821    5303 start.go:296] duration metric: took 131.777723ms for postStartSetup
	I1029 08:21:03.536162    5303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757691
	I1029 08:21:03.553350    5303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/config.json ...
	I1029 08:21:03.553638    5303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:21:03.553692    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.571554    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.673127    5303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:21:03.677880    5303 start.go:128] duration metric: took 11.312866034s to createHost
	I1029 08:21:03.677905    5303 start.go:83] releasing machines lock for "addons-757691", held for 11.313013326s
	I1029 08:21:03.677973    5303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757691
	I1029 08:21:03.696042    5303 ssh_runner.go:195] Run: cat /version.json
	I1029 08:21:03.696119    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.696396    5303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:21:03.696457    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.719562    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.723286    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.824201    5303 ssh_runner.go:195] Run: systemctl --version
	I1029 08:21:03.919108    5303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:21:03.958568    5303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:21:03.963034    5303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:21:03.963122    5303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:21:03.992611    5303 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 08:21:03.992633    5303 start.go:496] detecting cgroup driver to use...
	I1029 08:21:03.992670    5303 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:21:03.992756    5303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:21:04.013637    5303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:21:04.027192    5303 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:21:04.027254    5303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:21:04.045358    5303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:21:04.063801    5303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:21:04.185562    5303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:21:04.305550    5303 docker.go:234] disabling docker service ...
	I1029 08:21:04.305681    5303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:21:04.325555    5303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:21:04.338645    5303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:21:04.460546    5303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:21:04.579318    5303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:21:04.591488    5303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:21:04.605202    5303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:21:04.605281    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.614346    5303 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:21:04.614425    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.623194    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.631780    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.640944    5303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:21:04.649197    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.657882    5303 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.671328    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
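	After the sed edits above, the CRI-O drop-in should carry the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl; one way to confirm (sketch):

	    docker exec addons-757691 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf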
	I1029 08:21:04.679937    5303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:21:04.687394    5303 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1029 08:21:04.687459    5303 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1029 08:21:04.701946    5303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
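	br_netfilter was loaded because the bridge sysctl was missing on the first attempt; once the module is in, both values can be read back from the node (sketch):

	    docker exec addons-757691 sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward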
	I1029 08:21:04.709355    5303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:04.832603    5303 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:21:04.973274    5303 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:21:04.973427    5303 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:21:04.977126    5303 start.go:564] Will wait 60s for crictl version
	I1029 08:21:04.977234    5303 ssh_runner.go:195] Run: which crictl
	I1029 08:21:04.980613    5303 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:21:05.005905    5303 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:21:05.006072    5303 ssh_runner.go:195] Run: crio --version
	I1029 08:21:05.038988    5303 ssh_runner.go:195] Run: crio --version
	I1029 08:21:05.068954    5303 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:21:05.071881    5303 cli_runner.go:164] Run: docker network inspect addons-757691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:21:05.088660    5303 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:21:05.092508    5303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:21:05.102214    5303 kubeadm.go:884] updating cluster {Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:21:05.102334    5303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:21:05.102391    5303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:21:05.138936    5303 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:21:05.138960    5303 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:21:05.139021    5303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:21:05.165053    5303 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:21:05.165076    5303 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:21:05.165086    5303 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:21:05.165214    5303 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-757691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
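	The ExecStart flags above are written into the kubelet systemd drop-in by the 10-kubeadm.conf scp a few lines below; the merged unit can be viewed on the node with (sketch):

	    docker exec addons-757691 systemctl cat kubelet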
	I1029 08:21:05.165303    5303 ssh_runner.go:195] Run: crio config
	I1029 08:21:05.218862    5303 cni.go:84] Creating CNI manager for ""
	I1029 08:21:05.218888    5303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:21:05.218906    5303 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:21:05.218930    5303 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-757691 NodeName:addons-757691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:21:05.219055    5303 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-757691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:21:05.219135    5303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:21:05.226698    5303 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:21:05.226807    5303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 08:21:05.234162    5303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:21:05.246806    5303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:21:05.259843    5303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
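	A hedged way to sanity-check the rendered kubeadm config before init runs, using the path from the scp line above (sketch; assumes this kubeadm build accepts the `config validate` subcommand, which recent releases do):

	    docker exec addons-757691 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new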
	I1029 08:21:05.272170    5303 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1029 08:21:05.275793    5303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:21:05.285344    5303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:05.402628    5303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:21:05.418124    5303 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691 for IP: 192.168.49.2
	I1029 08:21:05.418193    5303 certs.go:195] generating shared ca certs ...
	I1029 08:21:05.418227    5303 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.418376    5303 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:21:05.709824    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt ...
	I1029 08:21:05.709856    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt: {Name:mk72169ccc25d4f6f0cad61bec2049a2dde9625a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.710080    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key ...
	I1029 08:21:05.710096    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key: {Name:mkae08a7d3fefa5e6571e0738456d0b61fd12ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.710189    5303 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:21:05.871204    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt ...
	I1029 08:21:05.871234    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt: {Name:mkf21d0bebeaa7c7b9c32d969e54b889f5ddf480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.871399    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key ...
	I1029 08:21:05.871414    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key: {Name:mkedc9b0619550237fb62c786cd16da5244a6baa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.871493    5303 certs.go:257] generating profile certs ...
	I1029 08:21:05.871555    5303 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.key
	I1029 08:21:05.871573    5303 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt with IP's: []
	I1029 08:21:05.976304    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt ...
	I1029 08:21:05.976337    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: {Name:mkb3d88a06621a28a140eadcc69a46fa07f7f7f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.976528    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.key ...
	I1029 08:21:05.976542    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.key: {Name:mka6c52d70446e43df54bc9e976975be9ab1708c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.976624    5303 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e
	I1029 08:21:05.976648    5303 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1029 08:21:06.296197    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e ...
	I1029 08:21:06.296228    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e: {Name:mk420f0377df64628682fdeb88f7df4473686247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.296421    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e ...
	I1029 08:21:06.296436    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e: {Name:mk6bf3445c2ecf0a18c24fb42b640fb4db7eafeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.296521    5303 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt
	I1029 08:21:06.296600    5303 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key
	I1029 08:21:06.296660    5303 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key
	I1029 08:21:06.296684    5303 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt with IP's: []
	I1029 08:21:06.833764    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt ...
	I1029 08:21:06.833793    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt: {Name:mk7b299db1251ef2ab798abf32d639d45537eb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.833964    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key ...
	I1029 08:21:06.833975    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key: {Name:mk0494e471f0e51e062592824e50500af09883dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.834196    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:21:06.834235    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:21:06.834263    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:21:06.834294    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:21:06.834916    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:21:06.862520    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:21:06.882250    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:21:06.903616    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:21:06.922467    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 08:21:06.940166    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:21:06.957558    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:21:06.974121    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 08:21:06.991516    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:21:07.010895    5303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:21:07.024094    5303 ssh_runner.go:195] Run: openssl version
	I1029 08:21:07.030497    5303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:21:07.039085    5303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:07.042708    5303 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:07.042800    5303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:07.083544    5303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
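The two steps above wire minikube's CA into the node trust store: openssl x509 -hash -noout prints the subject hash of minikubeCA.pem, and OpenSSL looks up trusted CAs in /etc/ssl/certs by <hash>.0 symlinks, hence the b5213941.0 link created here. A small check, run inside the node, using the paths from the log:

    # the printed hash should match the symlink name b5213941
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0    # -> /etc/ssl/certs/minikubeCA.pem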
	I1029 08:21:07.091607    5303 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:21:07.094913    5303 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 08:21:07.094960    5303 kubeadm.go:401] StartCluster: {Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:21:07.095065    5303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:21:07.095135    5303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:21:07.123146    5303 cri.go:89] found id: ""
	I1029 08:21:07.123263    5303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:21:07.130888    5303 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 08:21:07.138483    5303 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 08:21:07.138601    5303 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 08:21:07.146512    5303 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 08:21:07.146578    5303 kubeadm.go:158] found existing configuration files:
	
	I1029 08:21:07.146634    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 08:21:07.154264    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 08:21:07.154326    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 08:21:07.161577    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 08:21:07.169202    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 08:21:07.169348    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 08:21:07.176271    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 08:21:07.183662    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 08:21:07.183752    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 08:21:07.190882    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 08:21:07.198232    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 08:21:07.198311    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 08:21:07.205223    5303 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 08:21:07.243360    5303 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 08:21:07.243599    5303 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 08:21:07.268248    5303 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 08:21:07.268411    5303 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1029 08:21:07.268488    5303 kubeadm.go:319] OS: Linux
	I1029 08:21:07.268568    5303 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 08:21:07.268658    5303 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1029 08:21:07.268756    5303 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 08:21:07.268815    5303 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 08:21:07.268869    5303 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 08:21:07.268922    5303 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 08:21:07.268972    5303 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 08:21:07.269027    5303 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 08:21:07.269078    5303 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1029 08:21:07.333493    5303 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 08:21:07.333665    5303 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 08:21:07.333800    5303 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 08:21:07.343717    5303 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 08:21:07.350353    5303 out.go:252]   - Generating certificates and keys ...
	I1029 08:21:07.350464    5303 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 08:21:07.350546    5303 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 08:21:07.865851    5303 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 08:21:08.065573    5303 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 08:21:08.234567    5303 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 08:21:08.607398    5303 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 08:21:08.959531    5303 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 08:21:08.959926    5303 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-757691 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:21:09.485625    5303 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 08:21:09.485982    5303 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-757691 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:21:10.044126    5303 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 08:21:10.225313    5303 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 08:21:11.610594    5303 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 08:21:11.610885    5303 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 08:21:12.903936    5303 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 08:21:13.674659    5303 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 08:21:14.082470    5303 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 08:21:14.539988    5303 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 08:21:14.609614    5303 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 08:21:14.610212    5303 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 08:21:14.612982    5303 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 08:21:14.616259    5303 out.go:252]   - Booting up control plane ...
	I1029 08:21:14.616393    5303 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 08:21:14.616491    5303 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 08:21:14.617498    5303 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 08:21:14.632782    5303 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 08:21:14.633161    5303 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 08:21:14.641558    5303 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 08:21:14.641894    5303 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 08:21:14.642074    5303 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 08:21:14.771305    5303 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 08:21:14.771456    5303 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 08:21:15.272480    5303 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.52914ms
	I1029 08:21:15.275967    5303 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 08:21:15.276064    5303 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1029 08:21:15.276418    5303 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 08:21:15.276582    5303 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 08:21:19.641110    5303 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.364710349s
	I1029 08:21:20.058013    5303 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.781967061s
	I1029 08:21:21.777587    5303 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501511678s
	I1029 08:21:21.797564    5303 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 08:21:21.812798    5303 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 08:21:21.827634    5303 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 08:21:21.827900    5303 kubeadm.go:319] [mark-control-plane] Marking the node addons-757691 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 08:21:21.840846    5303 kubeadm.go:319] [bootstrap-token] Using token: k6kkly.9wi997fhhyt35ncy
	I1029 08:21:21.845941    5303 out.go:252]   - Configuring RBAC rules ...
	I1029 08:21:21.846086    5303 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 08:21:21.847552    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 08:21:21.855219    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 08:21:21.861410    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 08:21:21.865309    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 08:21:21.869320    5303 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 08:21:22.184244    5303 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 08:21:22.620060    5303 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 08:21:23.184553    5303 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 08:21:23.185610    5303 kubeadm.go:319] 
	I1029 08:21:23.185707    5303 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 08:21:23.185719    5303 kubeadm.go:319] 
	I1029 08:21:23.185819    5303 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 08:21:23.185832    5303 kubeadm.go:319] 
	I1029 08:21:23.185859    5303 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 08:21:23.185921    5303 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 08:21:23.185974    5303 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 08:21:23.185979    5303 kubeadm.go:319] 
	I1029 08:21:23.186036    5303 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 08:21:23.186040    5303 kubeadm.go:319] 
	I1029 08:21:23.186089    5303 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 08:21:23.186094    5303 kubeadm.go:319] 
	I1029 08:21:23.186149    5303 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 08:21:23.186228    5303 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 08:21:23.186299    5303 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 08:21:23.186304    5303 kubeadm.go:319] 
	I1029 08:21:23.186401    5303 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 08:21:23.186488    5303 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 08:21:23.186493    5303 kubeadm.go:319] 
	I1029 08:21:23.186580    5303 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k6kkly.9wi997fhhyt35ncy \
	I1029 08:21:23.186695    5303 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 08:21:23.186717    5303 kubeadm.go:319] 	--control-plane 
	I1029 08:21:23.186723    5303 kubeadm.go:319] 
	I1029 08:21:23.186811    5303 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 08:21:23.186817    5303 kubeadm.go:319] 
	I1029 08:21:23.186902    5303 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k6kkly.9wi997fhhyt35ncy \
	I1029 08:21:23.187009    5303 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 08:21:23.189464    5303 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 08:21:23.189729    5303 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 08:21:23.189855    5303 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
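For reference, the control-plane-check phase above polls three health endpoints; once the cluster is up they can be queried by hand with the URLs from the log (the two component ports are bound to localhost on the node, and -k skips verification of the self-signed serving certs). A hedged sketch:

    curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager (run on the node)
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler (run on the node)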
	I1029 08:21:23.189866    5303 cni.go:84] Creating CNI manager for ""
	I1029 08:21:23.189874    5303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:21:23.193048    5303 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 08:21:23.195904    5303 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 08:21:23.199610    5303 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 08:21:23.199627    5303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 08:21:23.211972    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 08:21:23.509499    5303 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 08:21:23.509594    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:23.509624    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-757691 minikube.k8s.io/updated_at=2025_10_29T08_21_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=addons-757691 minikube.k8s.io/primary=true
	I1029 08:21:23.649350    5303 ops.go:34] apiserver oom_adj: -16
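The oom_adj probe above reports -16 for the kube-apiserver process, meaning the kernel's OOM killer is strongly biased away from it (the default is 0, and -17 would exempt it entirely). The same check can be repeated on the node, as a sketch:

    minikube -p addons-757691 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'   # expect -16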
	I1029 08:21:23.649492    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:24.149539    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:24.649635    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:25.149625    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:25.649769    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:26.150337    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:26.650528    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:27.150173    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:27.649644    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:28.150384    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:28.284785    5303 kubeadm.go:1114] duration metric: took 4.775256007s to wait for elevateKubeSystemPrivileges
	I1029 08:21:28.284818    5303 kubeadm.go:403] duration metric: took 21.189860871s to StartCluster
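The burst of "kubectl get sa default" calls between 08:21:23 and 08:21:28 is minikube polling roughly every 500ms until the default ServiceAccount exists, a sign that the apiserver is serving and the serviceaccount controller has completed its first pass; that is the 4.78s elevateKubeSystemPrivileges wait reported above. A hand-rolled equivalent of that wait, sketched with the kubeconfig path from the log (run on the node):

    # poll until the default ServiceAccount is created (what the loop above waits for)
    until sudo kubectl --kubeconfig /var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
        sleep 0.5
    done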
	I1029 08:21:28.284835    5303 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:28.284942    5303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:21:28.285309    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:28.285499    5303 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:21:28.285663    5303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 08:21:28.285918    5303 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:28.285946    5303 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1029 08:21:28.286018    5303 addons.go:70] Setting yakd=true in profile "addons-757691"
	I1029 08:21:28.286031    5303 addons.go:239] Setting addon yakd=true in "addons-757691"
	I1029 08:21:28.286052    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.286500    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.287090    5303 addons.go:70] Setting metrics-server=true in profile "addons-757691"
	I1029 08:21:28.287109    5303 addons.go:239] Setting addon metrics-server=true in "addons-757691"
	I1029 08:21:28.287132    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.287535    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.288792    5303 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-757691"
	I1029 08:21:28.288865    5303 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-757691"
	I1029 08:21:28.288906    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.291181    5303 addons.go:70] Setting registry=true in profile "addons-757691"
	I1029 08:21:28.291410    5303 addons.go:239] Setting addon registry=true in "addons-757691"
	I1029 08:21:28.291691    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.292209    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.293100    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.291324    5303 addons.go:70] Setting registry-creds=true in profile "addons-757691"
	I1029 08:21:28.298563    5303 addons.go:239] Setting addon registry-creds=true in "addons-757691"
	I1029 08:21:28.298606    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.299068    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.293243    5303 out.go:179] * Verifying Kubernetes components...
	I1029 08:21:28.290727    5303 addons.go:70] Setting cloud-spanner=true in profile "addons-757691"
	I1029 08:21:28.304715    5303 addons.go:239] Setting addon cloud-spanner=true in "addons-757691"
	I1029 08:21:28.304786    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.305329    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.316151    5303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:28.290733    5303 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-757691"
	I1029 08:21:28.317947    5303 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-757691"
	I1029 08:21:28.317982    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.318442    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290740    5303 addons.go:70] Setting default-storageclass=true in profile "addons-757691"
	I1029 08:21:28.329550    5303 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-757691"
	I1029 08:21:28.329879    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290744    5303 addons.go:70] Setting gcp-auth=true in profile "addons-757691"
	I1029 08:21:28.345186    5303 mustload.go:66] Loading cluster: addons-757691
	I1029 08:21:28.345393    5303 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:28.345635    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290747    5303 addons.go:70] Setting ingress=true in profile "addons-757691"
	I1029 08:21:28.353863    5303 addons.go:239] Setting addon ingress=true in "addons-757691"
	I1029 08:21:28.353941    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.354479    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290750    5303 addons.go:70] Setting ingress-dns=true in profile "addons-757691"
	I1029 08:21:28.388720    5303 addons.go:239] Setting addon ingress-dns=true in "addons-757691"
	I1029 08:21:28.388780    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.389228    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.399832    5303 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1029 08:21:28.290754    5303 addons.go:70] Setting inspektor-gadget=true in profile "addons-757691"
	I1029 08:21:28.405516    5303 addons.go:239] Setting addon inspektor-gadget=true in "addons-757691"
	I1029 08:21:28.405555    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.406013    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.413037    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1029 08:21:28.413067    5303 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1029 08:21:28.413133    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.434997    5303 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1029 08:21:28.291330    5303 addons.go:70] Setting storage-provisioner=true in profile "addons-757691"
	I1029 08:21:28.436423    5303 addons.go:239] Setting addon storage-provisioner=true in "addons-757691"
	I1029 08:21:28.436462    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.436955    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.440727    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1029 08:21:28.440749    5303 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1029 08:21:28.440825    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.291334    5303 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-757691"
	I1029 08:21:28.472582    5303 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-757691"
	I1029 08:21:28.472894    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.291337    5303 addons.go:70] Setting volcano=true in profile "addons-757691"
	I1029 08:21:28.484866    5303 addons.go:239] Setting addon volcano=true in "addons-757691"
	I1029 08:21:28.484904    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.485392    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.291340    5303 addons.go:70] Setting volumesnapshots=true in profile "addons-757691"
	I1029 08:21:28.505712    5303 addons.go:239] Setting addon volumesnapshots=true in "addons-757691"
	I1029 08:21:28.505752    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.506225    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290717    5303 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-757691"
	I1029 08:21:28.516727    5303 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-757691"
	I1029 08:21:28.516776    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.517242    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.559045    5303 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1029 08:21:28.564489    5303 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:21:28.564516    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1029 08:21:28.564583    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.577825    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1029 08:21:28.607862    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1029 08:21:28.611632    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1029 08:21:28.611766    5303 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1029 08:21:28.611821    5303 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1029 08:21:28.642730    5303 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:21:28.642808    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1029 08:21:28.642909    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.650234    5303 addons.go:239] Setting addon default-storageclass=true in "addons-757691"
	I1029 08:21:28.650328    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.650928    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.611918    5303 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1029 08:21:28.658885    5303 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1029 08:21:28.659563    5303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 08:21:28.660840    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.660924    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.662494    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1029 08:21:28.662501    5303 out.go:179]   - Using image docker.io/registry:3.0.0
	I1029 08:21:28.662538    5303 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:21:28.668466    5303 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1029 08:21:28.669662    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1029 08:21:28.669738    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.692457    5303 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1029 08:21:28.692533    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1029 08:21:28.692644    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.724141    5303 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-757691"
	I1029 08:21:28.724245    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.724960    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.754551    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1029 08:21:28.754726    5303 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 08:21:28.758316    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1029 08:21:28.758553    5303 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:21:28.758594    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 08:21:28.758687    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.765502    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1029 08:21:28.768459    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1029 08:21:28.769213    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:28.772153    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1029 08:21:28.772289    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1029 08:21:28.772342    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1029 08:21:28.772455    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.781002    5303 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1029 08:21:28.781032    5303 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1029 08:21:28.781097    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.804623    5303 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1029 08:21:28.804644    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1029 08:21:28.804717    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	W1029 08:21:28.839899    5303 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1029 08:21:28.840417    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.843472    5303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:21:28.843981    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.849680    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:28.849822    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1029 08:21:28.850302    5303 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1029 08:21:28.854136    5303 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:21:28.854159    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1029 08:21:28.854226    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.854422    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1029 08:21:28.854432    5303 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1029 08:21:28.854481    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.880804    5303 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:21:28.880823    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1029 08:21:28.880879    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.922592    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.923779    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.940569    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.946850    5303 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 08:21:28.946870    5303 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 08:21:28.946930    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.968418    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.994088    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.996129    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.006144    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.007297    5303 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1029 08:21:29.009933    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.015607    5303 out.go:179]   - Using image docker.io/busybox:stable
	I1029 08:21:29.021148    5303 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:21:29.021173    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1029 08:21:29.021242    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:29.066156    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.069254    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.076529    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	W1029 08:21:29.077302    5303 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1029 08:21:29.077330    5303 retry.go:31] will retry after 294.845567ms: ssh: handshake failed: EOF
	W1029 08:21:29.083265    5303 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1029 08:21:29.083291    5303 retry.go:31] will retry after 242.646007ms: ssh: handshake failed: EOF
	I1029 08:21:29.095202    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.450130    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:21:29.452268    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1029 08:21:29.452425    5303 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1029 08:21:29.478528    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1029 08:21:29.478606    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1029 08:21:29.494687    5303 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:29.494760    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1029 08:21:29.519787    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:21:29.531382    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:21:29.535743    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:21:29.637123    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1029 08:21:29.637206    5303 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1029 08:21:29.648222    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1029 08:21:29.648295    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1029 08:21:29.652133    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1029 08:21:29.683485    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:21:29.689346    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:29.698834    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1029 08:21:29.698909    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1029 08:21:29.703898    5303 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1029 08:21:29.703972    5303 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1029 08:21:29.754875    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:21:29.757287    5303 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1029 08:21:29.757308    5303 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1029 08:21:29.785151    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1029 08:21:29.785225    5303 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1029 08:21:29.787712    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:21:29.801990    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1029 08:21:29.802065    5303 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1029 08:21:29.831807    5303 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:21:29.831882    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1029 08:21:29.870708    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1029 08:21:29.870772    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1029 08:21:29.909250    5303 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1029 08:21:29.909330    5303 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1029 08:21:29.954008    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:21:29.954086    5303 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1029 08:21:29.982981    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:21:29.984152    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:21:29.984224    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1029 08:21:30.102500    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1029 08:21:30.102574    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1029 08:21:30.158012    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:21:30.166576    5303 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1029 08:21:30.166656    5303 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1029 08:21:30.168750    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 08:21:30.196519    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:21:30.220289    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1029 08:21:30.220386    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1029 08:21:30.284286    5303 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.624693372s)
	I1029 08:21:30.284531    5303 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1029 08:21:30.284472    5303 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.440979295s)
	I1029 08:21:30.285385    5303 node_ready.go:35] waiting up to 6m0s for node "addons-757691" to be "Ready" ...
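The ssh_runner.go:235 entry just above covers the CoreDNS tweak: the coredns ConfigMap is streamed through sed so that a hosts block mapping 192.168.49.1 to host.minikube.internal is spliced in ahead of the "forward . /etc/resolv.conf" directive (and a log directive ahead of errors), then pushed back with kubectl replace. A minimal Go sketch of that string surgery on a Corefile, assuming the Corefile text is already in hand (injectHostRecord is a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts block immediately before the
	// "forward . /etc/resolv.conf" line, mirroring what the sed pipeline in
	// the log does to map host.minikube.internal to the gateway IP.
	func injectHostRecord(corefile, ip string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
		var out strings.Builder
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock)
			}
			out.WriteString(line + "\n")
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}

The fallthrough line keeps any lookup the hosts plugin cannot answer flowing on to the forward block, so only the injected name is handled locally.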
	I1029 08:21:30.352145    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1029 08:21:30.352231    5303 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1029 08:21:30.444793    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1029 08:21:30.444865    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1029 08:21:30.589275    5303 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:21:30.589348    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1029 08:21:30.695340    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1029 08:21:30.695413    5303 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1029 08:21:30.788922    5303 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-757691" context rescaled to 1 replicas
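kapi.go:214 above notes that the coredns deployment is rescaled to 1 replica for this single-node cluster. A minimal sketch of the equivalent operation done by shelling out to kubectl scale (the log does not show how minikube performs the rescale internally, so this is purely illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Scale the coredns deployment in kube-system down to one replica,
		// as the log reports for the single-node addons-757691 cluster.
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("scale failed:", err)
		}
	}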
	I1029 08:21:30.844829    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:21:30.915429    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1029 08:21:30.915509    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1029 08:21:31.108295    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1029 08:21:31.108394    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1029 08:21:31.282959    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:21:31.283033    5303 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1029 08:21:31.453243    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1029 08:21:32.310941    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:33.121204    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.601333119s)
	I1029 08:21:34.357668    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.826202652s)
	I1029 08:21:34.357750    5303 addons.go:480] Verifying addon ingress=true in "addons-757691"
	I1029 08:21:34.358157    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.674603312s)
	I1029 08:21:34.357908    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.82208598s)
	I1029 08:21:34.357932    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.705721433s)
	I1029 08:21:34.358309    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.668897236s)
	W1029 08:21:34.358325    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:34.358339    5303 retry.go:31] will retry after 351.870836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
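The failure above is kubectl's client-side validation rejecting ig-crd.yaml: at least one YAML document in the file reaches kubectl without top-level apiVersion and kind fields, so the apply exits non-zero even though the other manifests were created, and addons.go schedules a retry. A rough pre-flight check for that class of problem, using only stdlib string handling over the multi-document file (illustrative, not something minikube runs):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Report any YAML document in the manifest that is missing the
		// top-level apiVersion or kind field kubectl complains about.
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		for i, doc := range strings.Split(string(data), "\n---") {
			hasAPIVersion := strings.Contains(doc, "\napiVersion:") || strings.HasPrefix(strings.TrimSpace(doc), "apiVersion:")
			hasKind := strings.Contains(doc, "\nkind:") || strings.HasPrefix(strings.TrimSpace(doc), "kind:")
			if !hasAPIVersion || !hasKind {
				fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
			}
		}
	}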
	I1029 08:21:34.358392    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.603498327s)
	I1029 08:21:34.358421    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.570641107s)
	I1029 08:21:34.358456    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.375405175s)
	I1029 08:21:34.358463    5303 addons.go:480] Verifying addon registry=true in "addons-757691"
	I1029 08:21:34.358682    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.200597336s)
	I1029 08:21:34.358729    5303 addons.go:480] Verifying addon metrics-server=true in "addons-757691"
	I1029 08:21:34.358800    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.189976737s)
	I1029 08:21:34.358854    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.162255651s)
	I1029 08:21:34.359160    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.514251823s)
	W1029 08:21:34.359188    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1029 08:21:34.359202    5303 retry.go:31] will retry after 131.577103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
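This one fails for a different reason: the VolumeSnapshotClass object is submitted in the same kubectl apply that creates its CRD, and the CRD is not yet established when the class is validated, hence "no matches for kind ... ensure CRDs are installed first". retry.go then schedules another attempt shortly afterwards. A minimal sketch of that retry-with-growing-backoff pattern (the function and its signature are illustrative, not minikube's actual retry package):

	package main

	import (
		"fmt"
		"time"
	)

	// retryWithBackoff keeps calling apply until it succeeds or the attempts
	// are exhausted, roughly doubling the wait between tries, similar in
	// spirit to the "will retry after ..." lines in the log.
	func retryWithBackoff(apply func() error, attempts int, initial time.Duration) error {
		wait := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed (%v), will retry after %s\n", i+1, err, wait)
			time.Sleep(wait)
			wait *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("no matches for kind VolumeSnapshotClass yet")
			}
			return nil
		}, 5, 150*time.Millisecond)
		fmt.Println("final result:", err)
	}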
	I1029 08:21:34.363044    5303 out.go:179] * Verifying ingress addon...
	I1029 08:21:34.364953    5303 out.go:179] * Verifying registry addon...
	I1029 08:21:34.365080    5303 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-757691 service yakd-dashboard -n yakd-dashboard
	
	I1029 08:21:34.368932    5303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1029 08:21:34.368985    5303 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1029 08:21:34.377415    5303 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:21:34.377434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.383054    5303 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:21:34.383075    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
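The kapi.go:75/86/96 lines here and below poll for pods matching a label selector and keep looping while the reported state is still Pending. A compact client-go sketch of the same kind of wait loop, assuming the kubeconfig path from the log (this shows the general pattern only, not minikube's kapi implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			fmt.Printf("found %d pods, %d running\n", len(pods.Items), running)
			if len(pods.Items) > 0 && running == len(pods.Items) {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

A production wait would also check readiness conditions and honor a timeout, which this sketch omits.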
	I1029 08:21:34.491135    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:21:34.706889    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.253551589s)
	I1029 08:21:34.706925    5303 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-757691"
	I1029 08:21:34.710000    5303 out.go:179] * Verifying csi-hostpath-driver addon...
	I1029 08:21:34.710422    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:34.713604    5303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1029 08:21:34.723180    5303 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:21:34.723206    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:34.789265    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:34.876962    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.878183    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.217358    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.373630    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.374557    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.717570    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.874885    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.875037    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:36.216754    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.276513    5303 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1029 08:21:36.276610    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:36.294850    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:36.372569    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.372726    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:36.417430    5303 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1029 08:21:36.430273    5303 addons.go:239] Setting addon gcp-auth=true in "addons-757691"
	I1029 08:21:36.430317    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:36.430769    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:36.451089    5303 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1029 08:21:36.451138    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:36.471447    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
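For gcp-auth, the lines above copy google_application_credentials.json and the project name onto the node, then sshutil.go:53 opens a fresh SSH session to the container's published SSH port (127.0.0.1:32768) and reads the credentials file back with cat. A stripped-down sketch of opening that kind of key-based SSH session and running one command over it with golang.org/x/crypto/ssh (user, paths and port are copied from the log; error handling is minimal and host-key checking is skipped, which is only acceptable against a throwaway local node):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}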
	I1029 08:21:36.716672    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.872204    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.872585    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.217493    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:37.289473    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:37.339228    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.848048819s)
	I1029 08:21:37.339341    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.628894739s)
	W1029 08:21:37.339379    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:37.339403    5303 retry.go:31] will retry after 325.928364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:37.342525    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:37.345448    5303 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1029 08:21:37.348245    5303 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1029 08:21:37.348266    5303 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1029 08:21:37.361452    5303 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1029 08:21:37.361517    5303 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1029 08:21:37.373879    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.374532    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.377337    5303 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:37.377357    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1029 08:21:37.390248    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:37.665625    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:37.717274    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:37.885725    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.905942    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.969973    5303 addons.go:480] Verifying addon gcp-auth=true in "addons-757691"
	I1029 08:21:37.973037    5303 out.go:179] * Verifying gcp-auth addon...
	I1029 08:21:37.976842    5303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1029 08:21:37.995443    5303 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1029 08:21:37.995479    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:38.217583    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.372647    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.373243    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.480434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:38.642917    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:38.642949    5303 retry.go:31] will retry after 480.232558ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:38.716961    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.872226    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.872623    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.980487    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.123618    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:39.217638    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:39.373252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.373940    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:39.480434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.717973    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:39.789143    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:39.873516    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.874769    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:39.945035    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:39.945064    5303 retry.go:31] will retry after 590.773258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:39.979927    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.216682    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.372811    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.373163    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.480120    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.536330    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:40.718078    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.874274    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.875000    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.979919    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.217197    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:41.367576    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:41.367607    5303 retry.go:31] will retry after 976.675145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:41.372528    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.372967    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.479646    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.716845    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:41.873564    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.873767    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.980246    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.217643    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:42.288785    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:42.345006    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:42.374067    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:42.374808    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.480297    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.716933    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:42.874025    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:42.874718    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.980496    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:43.169181    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:43.169224    5303 retry.go:31] will retry after 2.610484783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:43.217333    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.371763    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.372254    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.480489    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:43.716910    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.872838    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.872998    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.979838    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.216837    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:44.372767    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.373380    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:44.480356    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.717037    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:44.789007    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:44.872939    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.873076    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:44.979832    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.227318    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.374026    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:45.374541    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.480507    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.717811    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.779863    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:45.872636    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.873415    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:45.980122    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:46.216825    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.372509    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.372884    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.479737    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:46.574130    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:46.574162    5303 retry.go:31] will retry after 3.939041515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:46.716961    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.872604    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.872787    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.980625    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.216846    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:47.288697    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:47.373173    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.373554    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.480491    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.717546    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:47.872349    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.872474    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.980304    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.218620    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.372950    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.373456    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.480177    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.717328    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.871924    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.872100    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.980252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.217625    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:49.372419    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.372636    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.479893    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.717349    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:49.789116    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:49.872053    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.872167    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.980441    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.217545    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.371813    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.371992    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.479833    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.514013    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:50.716570    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.873673    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.874631    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.980751    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.217825    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:51.372041    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:51.372073    5303 retry.go:31] will retry after 3.541014901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:51.374407    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:51.374537    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.480642    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.716980    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:51.872828    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:51.873033    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.979986    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.216873    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:52.288474    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:52.372495    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.372684    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.480728    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.716251    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.872375    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.872532    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.980381    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.217343    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.372254    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.372358    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.480199    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.717596    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.873497    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.873919    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.980453    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.216447    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:54.288556    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:54.372852    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.372968    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.480339    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.716390    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:54.873093    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.873548    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.913696    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:54.981310    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.217849    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:55.373797    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.374313    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.480503    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.718127    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:55.722446    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:55.722474    5303 retry.go:31] will retry after 4.142071292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:55.872345    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.872746    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.980738    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.216674    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:56.288748    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:56.373086    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.373384    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.480599    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.716338    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:56.873262    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.873601    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.980518    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.216918    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.372622    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.373316    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.480257    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.717570    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.873199    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.873354    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.980639    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.216541    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:58.372815    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.372920    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:58.479730    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.716868    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:58.788466    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:58.872472    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:58.873297    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.980469    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.217164    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.373013    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:59.373369    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.480737    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.716769    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.865020    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:59.875061    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:59.875391    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.980594    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.223587    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:00.374314    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.375816    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:00.481290    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.718063    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:00.789782    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	W1029 08:22:00.861582    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:00.861659    5303 retry.go:31] will retry after 7.915106874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:00.872691    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:00.873009    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.979594    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:01.217251    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.372292    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:01.372509    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:01.484230    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:01.718010    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.872275    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:01.872768    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:01.980721    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:02.216684    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.373178    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:02.373568    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.480583    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:02.716434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.872454    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.872522    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:02.980694    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:03.216466    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:03.289016    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:22:03.371778    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:03.372231    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.480282    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:03.717895    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:03.872253    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.872801    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:03.980745    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:04.216869    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.371988    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:04.371988    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:04.480252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:04.717285    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.872451    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:04.872925    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:04.979592    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:05.217373    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:05.289215    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:22:05.372648    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:05.373162    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:05.479908    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:05.717075    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:05.873324    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:05.873736    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:05.980489    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:06.216706    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.373404    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:06.373737    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:06.480364    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:06.717329    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.872377    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:06.872617    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:06.980263    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:07.217080    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:07.372376    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:07.372741    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:07.480459    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:07.717660    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:07.788268    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:22:07.872497    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:07.873000    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:07.979879    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:08.216652    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.372523    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:08.373855    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:08.479738    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:08.716877    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.776903    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:08.873680    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:08.873813    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:08.979839    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:09.243024    5303 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:22:09.243096    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.306667    5303 node_ready.go:49] node "addons-757691" is "Ready"
	I1029 08:22:09.306694    5303 node_ready.go:38] duration metric: took 39.021274902s for node "addons-757691" to be "Ready" ...
	I1029 08:22:09.306708    5303 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:22:09.306767    5303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:22:09.412687    5303 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:22:09.412707    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:09.413146    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:09.507523    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:09.716984    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.883750    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:09.884182    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:09.982813    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:10.217105    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.373765    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:10.373914    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:10.455247    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.67830316s)
	W1029 08:22:10.455281    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:10.455301    5303 retry.go:31] will retry after 9.191478297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:10.455338    5303 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.148561907s)
	I1029 08:22:10.455350    5303 api_server.go:72] duration metric: took 42.16982129s to wait for apiserver process to appear ...
	I1029 08:22:10.455355    5303 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:22:10.455369    5303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:22:10.465707    5303 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:22:10.467042    5303 api_server.go:141] control plane version: v1.34.1
	I1029 08:22:10.467100    5303 api_server.go:131] duration metric: took 11.738543ms to wait for apiserver health ...
	I1029 08:22:10.467124    5303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:22:10.472198    5303 system_pods.go:59] 19 kube-system pods found
	I1029 08:22:10.472303    5303 system_pods.go:61] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:10.472397    5303 system_pods.go:61] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:10.472429    5303 system_pods.go:61] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:10.472458    5303 system_pods.go:61] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:10.472480    5303 system_pods.go:61] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:10.472510    5303 system_pods.go:61] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:10.472540    5303 system_pods.go:61] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:10.472570    5303 system_pods.go:61] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:10.472601    5303 system_pods.go:61] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:10.472640    5303 system_pods.go:61] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:10.472662    5303 system_pods.go:61] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:10.472690    5303 system_pods.go:61] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:10.472728    5303 system_pods.go:61] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:10.472758    5303 system_pods.go:61] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:10.472783    5303 system_pods.go:61] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:10.472812    5303 system_pods.go:61] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:10.472845    5303 system_pods.go:61] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.472878    5303 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.472907    5303 system_pods.go:61] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:10.472935    5303 system_pods.go:74] duration metric: took 5.788237ms to wait for pod list to return data ...
	I1029 08:22:10.472977    5303 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:22:10.484635    5303 default_sa.go:45] found service account: "default"
	I1029 08:22:10.484702    5303 default_sa.go:55] duration metric: took 11.705016ms for default service account to be created ...
	I1029 08:22:10.484735    5303 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:22:10.485401    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:10.507673    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:10.507761    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:10.507785    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:10.507831    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:10.507863    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:10.507888    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:10.507914    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:10.507947    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:10.507976    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:10.508003    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:10.508024    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:10.508058    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:10.508088    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:10.508117    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:10.508143    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:10.508174    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:10.508199    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:10.508225    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.508274    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.508304    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:10.508383    5303 retry.go:31] will retry after 220.584529ms: missing components: kube-dns
	I1029 08:22:10.717900    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.732974    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:10.733064    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:10.733089    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:10.733131    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:10.733159    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:10.733185    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:10.733206    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:10.733238    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:10.733264    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:10.733292    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:10.733321    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:10.733352    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:10.733378    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:10.733405    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:10.733432    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:10.733465    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:10.733492    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:10.733518    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.733549    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.733580    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:10.733616    5303 retry.go:31] will retry after 288.662598ms: missing components: kube-dns
	I1029 08:22:10.880324    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:10.880561    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:10.980723    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:11.027028    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:11.027066    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:11.027076    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:11.027085    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:11.027092    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:11.027096    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:11.027102    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:11.027107    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:11.027112    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:11.027121    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:11.027125    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:11.027131    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:11.027148    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:11.027157    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:11.027169    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:11.027176    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:11.027182    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:11.027191    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.027197    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.027202    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:11.027220    5303 retry.go:31] will retry after 414.176369ms: missing components: kube-dns
	I1029 08:22:11.217979    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.373043    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:11.373277    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:11.447320    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:11.447358    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:11.447370    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:11.447378    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:11.447384    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:11.447389    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:11.447394    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:11.447404    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:11.447409    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:11.447425    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:11.447430    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:11.447440    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:11.447448    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:11.447454    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:11.447468    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:11.447478    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:11.447484    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:11.447493    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.447502    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.447510    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:11.447525    5303 retry.go:31] will retry after 417.054385ms: missing components: kube-dns
	I1029 08:22:11.481306    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:11.718836    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.873877    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:11.873924    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:11.873935    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:11.873949    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:11.873958    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:11.873963    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:11.873970    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:11.873985    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:11.873994    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:11.874004    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:11.874017    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:11.874026    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:11.874033    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:11.874064    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:11.874076    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:11.874084    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:11.874098    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:11.874108    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.874119    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.874128    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:11.874144    5303 retry.go:31] will retry after 458.682438ms: missing components: kube-dns
	I1029 08:22:11.877549    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:11.882839    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:11.987237    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:12.218049    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.336951    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:12.336987    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Running
	I1029 08:22:12.336999    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:12.337007    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:12.337015    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:12.337022    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:12.337028    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:12.337034    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:12.337038    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:12.337052    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:12.337061    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:12.337067    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:12.337081    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:12.337088    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:12.337097    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:12.337106    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:12.337115    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:12.337121    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:12.337128    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:12.337132    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:12.337142    5303 system_pods.go:126] duration metric: took 1.852389453s to wait for k8s-apps to be running ...
	I1029 08:22:12.337156    5303 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:22:12.337213    5303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:22:12.350472    5303 system_svc.go:56] duration metric: took 13.30884ms WaitForService to wait for kubelet
	I1029 08:22:12.350500    5303 kubeadm.go:587] duration metric: took 44.064969054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:22:12.350529    5303 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:22:12.353529    5303 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:22:12.353562    5303 node_conditions.go:123] node cpu capacity is 2
	I1029 08:22:12.353575    5303 node_conditions.go:105] duration metric: took 3.040401ms to run NodePressure ...
	I1029 08:22:12.353587    5303 start.go:242] waiting for startup goroutines ...
	I1029 08:22:12.372803    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:12.372975    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:12.480099    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:12.717721    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.874765    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:12.875128    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:12.984624    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:13.217950    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:13.373928    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:13.374369    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:13.480390    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:13.716910    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:13.872154    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:13.872354    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:13.981098    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:14.217509    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:14.373504    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:14.373585    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:14.480470    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:14.716657    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:14.873615    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:14.873974    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:14.979537    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:15.217987    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:15.374715    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:15.375127    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:15.481620    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:15.718249    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:15.874325    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:15.874799    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:15.981126    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:16.230302    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:16.378982    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:16.379492    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:16.483375    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:16.722933    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:16.876760    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:16.876849    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:16.990827    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:17.219054    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:17.373026    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:17.373510    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:17.480752    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:17.721490    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:17.874647    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:17.874792    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:17.979817    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:18.217674    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:18.374293    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:18.374429    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:18.480567    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:18.717755    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:18.874137    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:18.874728    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:18.979820    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:19.217111    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:19.373050    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:19.373192    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:19.480164    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:19.647538    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:19.717267    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:19.874072    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:19.874661    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:19.981049    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:20.217583    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:20.373344    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:20.373509    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:20.480581    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:20.718384    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:20.754528    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.106953221s)
	W1029 08:22:20.754560    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:20.754579    5303 retry.go:31] will retry after 27.842036107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
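	[editor's note] The validation failure above means kubectl's client-side schema check found a document inside ig-crd.yaml that does not declare the mandatory top-level apiVersion and kind fields; the other objects in the bundle (namespace, RBAC, daemonset) applied cleanly, so only the CRD document is affected. The contents of the ig-crd.yaml shipped in this build are not shown in this log, so the sketch below is purely illustrative of what a well-formed apiextensions.k8s.io/v1 CustomResourceDefinition header looks like; the CRD name and group are hypothetical placeholders, not taken from the addon:

	    apiVersion: apiextensions.k8s.io/v1      # required top-level field flagged as missing
	    kind: CustomResourceDefinition           # required top-level field flagged as missing
	    metadata:
	      name: traces.example.gadget.io         # hypothetical name for illustration only
	    spec:
	      group: example.gadget.io               # hypothetical group for illustration only
	      scope: Namespaced
	      names:
	        plural: traces
	        singular: trace
	        kind: Trace
	      versions:
	        - name: v1alpha1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	              x-kubernetes-preserve-unknown-fields: true

	Passing --validate=false, as the error suggests, would only suppress the check rather than fix the manifest, which is why the addon manager keeps retrying the same apply below.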
	I1029 08:22:20.873383    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:20.873581    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:20.980903    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:21.217601    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:21.374637    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:21.375025    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:21.480291    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:21.717914    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:21.874364    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:21.874524    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:21.980581    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:22.217289    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:22.374262    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.374860    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:22.479778    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:22.716672    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:22.873594    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.873725    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:22.979487    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:23.216798    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:23.373845    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:23.373968    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:23.479519    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:23.716694    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:23.874369    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:23.874449    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:23.980819    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:24.217520    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:24.374547    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:24.375049    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:24.481610    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:24.716520    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:24.873214    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:24.873633    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:24.980386    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:25.217220    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:25.381262    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.381700    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:25.480812    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:25.722165    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:25.876514    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.876648    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:25.992673    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:26.217439    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:26.376332    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.376766    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:26.481389    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:26.718247    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:26.875165    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.875624    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:26.981610    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:27.225712    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:27.373911    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:27.374043    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.479973    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:27.718330    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:27.873870    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:27.874241    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.980465    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:28.218095    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:28.375279    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:28.376717    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:28.480974    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:28.718273    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:28.874782    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:28.875278    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:28.980919    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:29.218200    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:29.379845    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.380363    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:29.481452    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:29.718078    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:29.872884    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.873006    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:29.979953    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:30.217385    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:30.373966    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:30.374083    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:30.480294    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:30.717816    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:30.873646    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:30.873740    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:30.980755    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:31.217218    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:31.373622    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.373735    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:31.480834    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:31.717696    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:31.873047    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:31.873304    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.980714    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:32.217212    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:32.373464    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.373788    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:32.480290    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:32.717546    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:32.873978    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:32.874091    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.980277    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:33.217174    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:33.372573    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:33.372754    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:33.480831    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:33.717017    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:33.873281    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:33.873704    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:33.981227    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:34.217775    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:34.373676    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.373762    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:34.481691    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:34.717785    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:34.873604    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:34.873784    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.981222    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:35.219959    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:35.377315    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.377419    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:35.480447    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:35.717239    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:35.874753    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.875084    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:35.980184    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:36.218388    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:36.375967    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:36.377644    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:36.480878    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:36.717457    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:36.873760    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:36.874183    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:36.980457    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:37.223873    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:37.373072    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:37.373240    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.480020    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:37.717313    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:37.873351    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:37.874087    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.979895    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:38.217252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:38.373438    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.373694    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:38.481621    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:38.717468    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:38.873356    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:38.873579    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.980541    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:39.221214    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:39.373018    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:39.373190    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:39.480591    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:39.718021    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:39.873419    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:39.873796    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:39.981816    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:40.223895    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:40.373853    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:40.374376    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.480898    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:40.718115    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:40.873357    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.874042    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:40.982505    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:41.217271    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:41.373003    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:41.373319    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:41.480144    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:41.718863    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:41.875145    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:41.875449    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:41.980471    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:42.220801    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:42.374260    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:42.374997    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:42.480956    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:42.717655    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:42.874800    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:42.876263    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:42.981328    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:43.217007    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:43.373857    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:43.374290    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:43.480620    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:43.716972    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:43.874256    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:43.875063    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:43.981475    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:44.218092    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:44.374209    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:44.374598    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.480445    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:44.718193    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:44.873218    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:44.873402    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.980788    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:45.218123    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:45.377266    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:45.377502    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:45.481035    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:45.717783    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:45.873926    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:45.874846    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:45.980474    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:46.217386    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:46.373463    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:46.373889    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:46.485812    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:46.717148    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:46.872917    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:46.873730    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:46.979831    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:47.217168    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:47.373745    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:47.374161    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.480494    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:47.718872    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:47.873849    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:47.873990    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.979715    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:48.218087    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:48.373794    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.374147    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:48.480718    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:48.597590    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:48.717470    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:48.873409    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.874525    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:48.981277    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:49.217945    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:49.374143    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.374561    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:49.480539    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:22:49.578617    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:49.578649    5303 retry.go:31] will retry after 19.709762938s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:49.717108    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:49.874048    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.874215    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:49.980697    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:50.217149    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:50.374673    5303 kapi.go:107] duration metric: took 1m16.005739028s to wait for kubernetes.io/minikube-addons=registry ...
	I1029 08:22:50.375195    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:50.481055    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:50.718473    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:50.875301    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:50.980277    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:51.219727    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:51.373107    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.479848    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:51.718388    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:51.872528    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.980549    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:52.216545    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:52.373035    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:52.480925    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:52.718418    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:52.877163    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:52.981001    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:53.220463    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:53.372992    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:53.481084    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:53.717424    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:53.872627    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:53.980994    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:54.220532    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:54.380502    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:54.481949    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:54.727948    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:54.874743    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:54.996303    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:55.218074    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:55.373480    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:55.480036    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:55.717625    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:55.873448    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:55.980411    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:56.217649    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:56.377460    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:56.480467    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:56.717328    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:56.872383    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:56.980382    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:57.216774    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:57.372731    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:57.479697    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:57.717076    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:57.872082    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:57.979932    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:58.217721    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:58.373689    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:58.481150    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:58.719042    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:58.872277    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:58.979662    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:59.217145    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:59.372392    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.480274    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:59.717877    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:59.872273    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.980810    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:00.218204    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:00.373223    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:00.481114    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:00.718112    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:00.871821    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:00.979636    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:01.220170    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:01.378952    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:01.481178    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:01.718023    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:01.872409    5303 kapi.go:107] duration metric: took 1m27.503418804s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1029 08:23:01.980038    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:02.217846    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:02.481401    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:02.720706    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:02.980978    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:03.217145    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:03.480479    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:03.717151    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:03.980434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:04.217638    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:04.480761    5303 kapi.go:107] duration metric: took 1m26.503920246s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1029 08:23:04.483742    5303 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-757691 cluster.
	I1029 08:23:04.487506    5303 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1029 08:23:04.490569    5303 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1029 08:23:04.717294    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:05.218712    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:05.717667    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:06.218731    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:06.718520    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:07.219285    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:07.717677    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:08.217454    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:08.717604    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:09.217272    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:09.289616    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:23:09.717684    5303 kapi.go:107] duration metric: took 1m35.004078754s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W1029 08:23:10.200042    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 08:23:10.200139    5303 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1029 08:23:10.203438    5303 out.go:179] * Enabled addons: registry-creds, storage-provisioner-rancher, nvidia-device-plugin, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1029 08:23:10.207160    5303 addons.go:515] duration metric: took 1m41.920285412s for enable addons: enabled=[registry-creds storage-provisioner-rancher nvidia-device-plugin cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1029 08:23:10.207214    5303 start.go:247] waiting for cluster config update ...
	I1029 08:23:10.207239    5303 start.go:256] writing updated cluster config ...
	I1029 08:23:10.209388    5303 ssh_runner.go:195] Run: rm -f paused
	I1029 08:23:10.214355    5303 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:23:10.218131    5303 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.223838    5303 pod_ready.go:94] pod "coredns-66bc5c9577-bzfbh" is "Ready"
	I1029 08:23:10.223879    5303 pod_ready.go:86] duration metric: took 5.72535ms for pod "coredns-66bc5c9577-bzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.226114    5303 pod_ready.go:83] waiting for pod "etcd-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.230538    5303 pod_ready.go:94] pod "etcd-addons-757691" is "Ready"
	I1029 08:23:10.230567    5303 pod_ready.go:86] duration metric: took 4.423938ms for pod "etcd-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.233189    5303 pod_ready.go:83] waiting for pod "kube-apiserver-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.238247    5303 pod_ready.go:94] pod "kube-apiserver-addons-757691" is "Ready"
	I1029 08:23:10.238274    5303 pod_ready.go:86] duration metric: took 5.060088ms for pod "kube-apiserver-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.240864    5303 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.618818    5303 pod_ready.go:94] pod "kube-controller-manager-addons-757691" is "Ready"
	I1029 08:23:10.618851    5303 pod_ready.go:86] duration metric: took 377.95873ms for pod "kube-controller-manager-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.818544    5303 pod_ready.go:83] waiting for pod "kube-proxy-lfn78" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.218482    5303 pod_ready.go:94] pod "kube-proxy-lfn78" is "Ready"
	I1029 08:23:11.218514    5303 pod_ready.go:86] duration metric: took 399.940401ms for pod "kube-proxy-lfn78" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.419051    5303 pod_ready.go:83] waiting for pod "kube-scheduler-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.818590    5303 pod_ready.go:94] pod "kube-scheduler-addons-757691" is "Ready"
	I1029 08:23:11.818618    5303 pod_ready.go:86] duration metric: took 399.539059ms for pod "kube-scheduler-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.818633    5303 pod_ready.go:40] duration metric: took 1.604244151s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:23:12.241372    5303 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 08:23:12.244697    5303 out.go:179] * Done! kubectl is now configured to use "addons-757691" cluster and "default" namespace by default
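	Editor's note on the inspektor-gadget failure logged above: kubectl validates each manifest document before applying it, and a document that omits the apiVersion and kind fields is rejected exactly as reported ("apiVersion not set, kind not set"). The --validate=false flag suggested in the message only suppresses the check; it would not make the malformed document usable. A minimal sketch for inspecting the manifest on the node follows (the file contents are not captured in this report, so the expected header in the comments is an assumption based on the file name ig-crd.yaml):
	
	  # View the start of the generated manifest inside the minikube node
	  minikube -p addons-757691 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml
	  # A well-formed CRD document would be expected to begin with:
	  #   apiVersion: apiextensions.k8s.io/v1
	  #   kind: CustomResourceDefinition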
	
	
	==> CRI-O <==
	Oct 29 08:26:01 addons-757691 crio[833]: time="2025-10-29T08:26:01.202335883Z" level=info msg="Removed container 624c007a074f908e33e4515651cab537992dc70ced86b0d2d3be7017570f00f4: kube-system/registry-creds-764b6fb674-7wrll/registry-creds" id=2206a68a-2789-4394-871b-7526ea79ade6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.422665979Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-scv4b/POD" id=508f8c8d-2199-4633-8f7c-fc8fb83f920f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.422737102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.440290361Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-scv4b Namespace:default ID:087e7a3f4af75d73646b7070c784c712e50dfae5299c1de96dcfb89908760864 UID:c595e89a-c4ce-4e32-b337-188a50796b2a NetNS:/var/run/netns/e6d93314-fea7-4742-b9cc-8c9e0059bb9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e74f20}] Aliases:map[]}"
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.442312628Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-scv4b to CNI network \"kindnet\" (type=ptp)"
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.460898185Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-scv4b Namespace:default ID:087e7a3f4af75d73646b7070c784c712e50dfae5299c1de96dcfb89908760864 UID:c595e89a-c4ce-4e32-b337-188a50796b2a NetNS:/var/run/netns/e6d93314-fea7-4742-b9cc-8c9e0059bb9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e74f20}] Aliases:map[]}"
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.46121658Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-scv4b for CNI network kindnet (type=ptp)"
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.47088755Z" level=info msg="Ran pod sandbox 087e7a3f4af75d73646b7070c784c712e50dfae5299c1de96dcfb89908760864 with infra container: default/hello-world-app-5d498dc89-scv4b/POD" id=508f8c8d-2199-4633-8f7c-fc8fb83f920f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.475031958Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a6ae887b-d178-4f71-8750-ffacc0cd584f name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.47546617Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=a6ae887b-d178-4f71-8750-ffacc0cd584f name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.476355862Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=a6ae887b-d178-4f71-8750-ffacc0cd584f name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.482549877Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0df5af91-3826-48d7-824c-8f40161a9eef name=/runtime.v1.ImageService/PullImage
	Oct 29 08:26:12 addons-757691 crio[833]: time="2025-10-29T08:26:12.485055145Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.140405094Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0df5af91-3826-48d7-824c-8f40161a9eef name=/runtime.v1.ImageService/PullImage
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.141125488Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c4bae6ba-77dd-4bf7-b154-7185e6045d2c name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.151377723Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=27bbb80c-08c1-4be7-8016-74582db20831 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.157596321Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-scv4b/hello-world-app" id=3efe376c-943f-4690-b115-5d88efcb2c18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.157900759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.169957463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.170298053Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bd99e613098e8683da28a062c8743385f0b637ce7a994e2605dad145d3f0c2d9/merged/etc/passwd: no such file or directory"
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.170387646Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bd99e613098e8683da28a062c8743385f0b637ce7a994e2605dad145d3f0c2d9/merged/etc/group: no such file or directory"
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.170717225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.186927453Z" level=info msg="Created container 2dbbd60ddfaedfc993c28a2ac5dd5dcde66b3d16346a684aca74db52859b5233: default/hello-world-app-5d498dc89-scv4b/hello-world-app" id=3efe376c-943f-4690-b115-5d88efcb2c18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.190638026Z" level=info msg="Starting container: 2dbbd60ddfaedfc993c28a2ac5dd5dcde66b3d16346a684aca74db52859b5233" id=c7e06d00-2694-4662-9abd-a3145b9e4a66 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:26:13 addons-757691 crio[833]: time="2025-10-29T08:26:13.193779748Z" level=info msg="Started container" PID=7251 containerID=2dbbd60ddfaedfc993c28a2ac5dd5dcde66b3d16346a684aca74db52859b5233 description=default/hello-world-app-5d498dc89-scv4b/hello-world-app id=c7e06d00-2694-4662-9abd-a3145b9e4a66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=087e7a3f4af75d73646b7070c784c712e50dfae5299c1de96dcfb89908760864
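	Editor's note: the CRI-O entries above trace the full lifecycle of default/hello-world-app-5d498dc89-scv4b: sandbox creation, CNI attachment to the kindnet network, the pull of docker.io/kicbase/echo-server:1.0, and finally container creation and start. A sketch for cross-checking that container directly against the runtime (assuming crictl is available inside the node, as it normally is in minikube images; the container ID is taken from the log lines above):
	
	  # List the running container and inspect it by ID
	  minikube -p addons-757691 ssh -- sudo crictl ps --name hello-world-app
	  minikube -p addons-757691 ssh -- sudo crictl inspect 2dbbd60ddfaedfc993c28a2ac5dd5dcde66b3d16346a684aca74db52859b5233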
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	2dbbd60ddfaed       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   087e7a3f4af75       hello-world-app-5d498dc89-scv4b             default
	35965c8fed743       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             13 seconds ago           Exited              registry-creds                           4                   b70429b99a9bc       registry-creds-764b6fb674-7wrll             kube-system
	d130d9ccc9668       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   3fb506c4056fb       nginx                                       default
	988babaa55e15       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   16cb5ae1a1f48       busybox                                     default
	ee8944794e805       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	32f7a28d2d03b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	239ec534461a0       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	0555333eb38f5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	69ba1f444f956       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   7a6e958924609       gcp-auth-78565c9fb4-7c65l                   gcp-auth
	86329b8c65996       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   3ab16108b2714       ingress-nginx-controller-675c5ddd98-8xwgl   ingress-nginx
	b7ebb9338f4b7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	26f10b73cd601       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   b6d229a38598d       gadget-lfsrs                                gadget
	861cd9d17d1a2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   482babb3e37e3       registry-proxy-wsh7n                        kube-system
	4f38205b7fd4d       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   7f4a810201735       nvidia-device-plugin-daemonset-k472l        kube-system
	080445adfb273       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	525382941facb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   d15fd5a5e902d       snapshot-controller-7d9fbc56b8-46nzh        kube-system
	5d17cf36f0fb1       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   cc42b9677080a       yakd-dashboard-5ff678cb9-z7rr6              yakd-dashboard
	0d2bb5596e6b3       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   9f540b11d4510       local-path-provisioner-648f6765c9-t42xh     local-path-storage
	c8fe768126de3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   8192fa2dad007       registry-6b586f9694-rmhqh                   kube-system
	444ef3af30aeb       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   b1dce3800ca5f       csi-hostpath-resizer-0                      kube-system
	0de41296f0ad8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   e912b533b760e       cloud-spanner-emulator-86bd5cbb97-ddvrf     default
	53ee1a72ac2ca       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    1                   8abac4834b62a       ingress-nginx-admission-patch-gtc6l         ingress-nginx
	11d81ea66afbd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   dfa8291cdeb8c       ingress-nginx-admission-create-6btnm        ingress-nginx
	a89be2ad8c3cb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   178d76584b528       kube-ingress-dns-minikube                   kube-system
	03254ae94d330       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   f941bbdcc5a20       snapshot-controller-7d9fbc56b8-n9z4k        kube-system
	380a55eebf3cd       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   47326e5e80fde       metrics-server-85b7d694d7-2bwkc             kube-system
	dbc66dc27a615       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   fde2a1df08991       csi-hostpath-attacher-0                     kube-system
	561fd8a760135       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   1e96f9f9d3ba9       coredns-66bc5c9577-bzfbh                    kube-system
	bc4be5a012bc9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   cbc4dc0907694       storage-provisioner                         kube-system
	fb05a0521754d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   1883c7a31064a       kube-proxy-lfn78                            kube-system
	bdb041cabd34f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   782f17f840015       kindnet-v4rb6                               kube-system
	349c9103101d7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             4 minutes ago            Running             kube-controller-manager                  0                   13a6662a15be1       kube-controller-manager-addons-757691       kube-system
	6fb3b53c30069       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             4 minutes ago            Running             kube-scheduler                           0                   ea6b0219bdeeb       kube-scheduler-addons-757691                kube-system
	df417919fab6f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             4 minutes ago            Running             kube-apiserver                           0                   9a36fb41a142f       kube-apiserver-addons-757691                kube-system
	2a94afd232256       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             4 minutes ago            Running             etcd                                     0                   55efdf76794cb       etcd-addons-757691                          kube-system
	
	
	==> coredns [561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6] <==
	[INFO] 10.244.0.15:34000 - 26741 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002406851s
	[INFO] 10.244.0.15:34000 - 29688 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000139431s
	[INFO] 10.244.0.15:34000 - 21376 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000147013s
	[INFO] 10.244.0.15:44602 - 2698 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000244318s
	[INFO] 10.244.0.15:44602 - 2936 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171735s
	[INFO] 10.244.0.15:42075 - 33231 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104189s
	[INFO] 10.244.0.15:42075 - 33436 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100194s
	[INFO] 10.244.0.15:50953 - 48969 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000237803s
	[INFO] 10.244.0.15:50953 - 48780 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000255798s
	[INFO] 10.244.0.15:55147 - 23590 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001922686s
	[INFO] 10.244.0.15:55147 - 24035 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001169866s
	[INFO] 10.244.0.15:50947 - 9710 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134434s
	[INFO] 10.244.0.15:50947 - 9314 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080033s
	[INFO] 10.244.0.21:37427 - 23942 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189466s
	[INFO] 10.244.0.21:42923 - 1160 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105305s
	[INFO] 10.244.0.21:60789 - 38809 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167838s
	[INFO] 10.244.0.21:55041 - 23874 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098938s
	[INFO] 10.244.0.21:38291 - 65129 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121191s
	[INFO] 10.244.0.21:39590 - 44302 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124113s
	[INFO] 10.244.0.21:59908 - 12824 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00634966s
	[INFO] 10.244.0.21:44784 - 62183 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003179035s
	[INFO] 10.244.0.21:46494 - 39593 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002051771s
	[INFO] 10.244.0.21:55384 - 52854 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002161048s
	[INFO] 10.244.0.23:46293 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00018858s
	[INFO] 10.244.0.23:55622 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094836s
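	Editor's note: the NXDOMAIN/NOERROR pairs above are the normal effect of pod DNS search-path expansion. With the default ndots:5 option, a name such as registry.kube-system.svc.cluster.local or storage.googleapis.com is first tried against each configured search domain (the pod's namespace domain, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal suffix) before the bare name is resolved, which matches the query sequence logged here. An illustrative pod resolv.conf under those assumptions (the actual file is not captured in this report, and 10.96.0.10 is only the conventional kube-dns service IP, not confirmed here):
	
	  nameserver 10.96.0.10
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  options ndots:5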
	
	
	==> describe nodes <==
	Name:               addons-757691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-757691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=addons-757691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_21_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-757691
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-757691"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-757691
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:26:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:24:56 +0000   Wed, 29 Oct 2025 08:21:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:24:56 +0000   Wed, 29 Oct 2025 08:21:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:24:56 +0000   Wed, 29 Oct 2025 08:21:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:24:56 +0000   Wed, 29 Oct 2025 08:22:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-757691
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b8735395-3669-4c20-84a8-3e15bb7194b2
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-86bd5cbb97-ddvrf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  default                     hello-world-app-5d498dc89-scv4b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-lfsrs                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  gcp-auth                    gcp-auth-78565c9fb4-7c65l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8xwgl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m40s
	  kube-system                 coredns-66bc5c9577-bzfbh                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m46s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 csi-hostpathplugin-gzlfm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 etcd-addons-757691                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m52s
	  kube-system                 kindnet-v4rb6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m47s
	  kube-system                 kube-apiserver-addons-757691                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-addons-757691        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-lfn78                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-scheduler-addons-757691                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 metrics-server-85b7d694d7-2bwkc              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m41s
	  kube-system                 nvidia-device-plugin-daemonset-k472l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 registry-6b586f9694-rmhqh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-creds-764b6fb674-7wrll              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 registry-proxy-wsh7n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-46nzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-n9z4k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  local-path-storage          local-path-provisioner-648f6765c9-t42xh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-z7rr6               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m45s  kube-proxy       
	  Normal   Starting                 4m52s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m52s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m52s  kubelet          Node addons-757691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m52s  kubelet          Node addons-757691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m52s  kubelet          Node addons-757691 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m48s  node-controller  Node addons-757691 event: Registered Node addons-757691 in Controller
	  Normal   NodeReady                4m6s   kubelet          Node addons-757691 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014848] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520802] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035216] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815569] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.730396] kauditd_printk_skb: 36 callbacks suppressed
	[Oct29 08:19] kauditd_printk_skb: 8 callbacks suppressed
	[Oct29 08:21] overlayfs: idmapped layers are currently not supported
	[  +0.080642] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160] <==
	{"level":"warn","ts":"2025-10-29T08:21:18.493552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.512935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.524726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.545119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.561026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.580254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.606238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.617266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.627889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.649031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.671740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.681612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.705173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.720898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.739164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.765265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.780704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.801030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.892539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:34.955015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:34.973242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.921150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.942886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.981115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.996136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36044","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [69ba1f444f956f1dcc0f5189a80713046222b26a14a91565d52356faf252a0c1] <==
	2025/10/29 08:23:04 GCP Auth Webhook started!
	2025/10/29 08:23:13 Ready to marshal response ...
	2025/10/29 08:23:13 Ready to write response ...
	2025/10/29 08:23:13 Ready to marshal response ...
	2025/10/29 08:23:13 Ready to write response ...
	2025/10/29 08:23:13 Ready to marshal response ...
	2025/10/29 08:23:13 Ready to write response ...
	2025/10/29 08:23:34 Ready to marshal response ...
	2025/10/29 08:23:34 Ready to write response ...
	2025/10/29 08:23:41 Ready to marshal response ...
	2025/10/29 08:23:41 Ready to write response ...
	2025/10/29 08:23:51 Ready to marshal response ...
	2025/10/29 08:23:51 Ready to write response ...
	2025/10/29 08:23:59 Ready to marshal response ...
	2025/10/29 08:23:59 Ready to write response ...
	2025/10/29 08:24:20 Ready to marshal response ...
	2025/10/29 08:24:20 Ready to write response ...
	2025/10/29 08:24:20 Ready to marshal response ...
	2025/10/29 08:24:20 Ready to write response ...
	2025/10/29 08:24:27 Ready to marshal response ...
	2025/10/29 08:24:27 Ready to write response ...
	2025/10/29 08:26:12 Ready to marshal response ...
	2025/10/29 08:26:12 Ready to write response ...
	
	
	==> kernel <==
	 08:26:14 up 8 min,  0 user,  load average: 0.53, 1.02, 0.58
	Linux addons-757691 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de] <==
	I1029 08:24:08.460006       1 main.go:301] handling current node
	I1029 08:24:18.459164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:18.459196       1 main.go:301] handling current node
	I1029 08:24:28.459205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:28.459239       1 main.go:301] handling current node
	I1029 08:24:38.459068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:38.459203       1 main.go:301] handling current node
	I1029 08:24:48.459807       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:48.459851       1 main.go:301] handling current node
	I1029 08:24:58.460394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:58.460499       1 main.go:301] handling current node
	I1029 08:25:08.459815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:08.459857       1 main.go:301] handling current node
	I1029 08:25:18.459276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:18.459308       1 main.go:301] handling current node
	I1029 08:25:28.459856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:28.459945       1 main.go:301] handling current node
	I1029 08:25:38.459289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:38.459325       1 main.go:301] handling current node
	I1029 08:25:48.459699       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:48.459735       1 main.go:301] handling current node
	I1029 08:25:58.460059       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:58.460213       1 main.go:301] handling current node
	I1029 08:26:08.459756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:26:08.459874       1 main.go:301] handling current node
	
	
	==> kube-apiserver [df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377] <==
	W1029 08:21:56.981127       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:56.995918       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:22:09.060137       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.17.24:443: connect: connection refused
	E1029 08:22:09.060273       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.17.24:443: connect: connection refused" logger="UnhandledError"
	W1029 08:22:09.092563       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.17.24:443: connect: connection refused
	E1029 08:22:09.092678       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.17.24:443: connect: connection refused" logger="UnhandledError"
	W1029 08:22:09.177139       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.17.24:443: connect: connection refused
	E1029 08:22:09.177390       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.17.24:443: connect: connection refused" logger="UnhandledError"
	E1029 08:22:27.137053       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	W1029 08:22:27.142617       1 handler_proxy.go:99] no RequestInfo found in the context
	E1029 08:22:27.142685       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1029 08:22:27.143531       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	E1029 08:22:27.149485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	E1029 08:22:27.161003       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	I1029 08:22:27.278367       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1029 08:23:23.577173       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40844: use of closed network connection
	E1029 08:23:23.948477       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40892: use of closed network connection
	I1029 08:23:51.111156       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1029 08:23:51.410003       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.206.199"}
	I1029 08:23:53.999104       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1029 08:24:06.709090       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1029 08:26:12.290043       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.159.174"}
	
	
	==> kube-controller-manager [349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47] <==
	I1029 08:21:26.945995       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 08:21:26.946062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-757691"
	I1029 08:21:26.946103       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 08:21:26.946974       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 08:21:26.947047       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:21:26.948025       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 08:21:26.948073       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 08:21:26.948097       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:21:26.948276       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:21:26.948566       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:21:26.948750       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 08:21:26.949997       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 08:21:26.950070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:21:26.963109       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:21:26.969230       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E1029 08:21:56.906934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:21:56.907091       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1029 08:21:56.907154       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1029 08:21:56.954284       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1029 08:21:56.965662       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1029 08:21:57.007857       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:21:57.072231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:22:11.954238       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1029 08:22:27.014609       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:22:27.150713       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b] <==
	I1029 08:21:28.316489       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:21:28.533835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:21:28.634214       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:21:28.634258       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:21:28.634324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:21:28.861207       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:21:28.861339       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:21:28.869880       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:21:28.890231       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:21:28.890260       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:21:28.894739       1 config.go:200] "Starting service config controller"
	I1029 08:21:28.894754       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:21:28.894776       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:21:28.894780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:21:28.894804       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:21:28.894809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:21:28.926331       1 config.go:309] "Starting node config controller"
	I1029 08:21:28.926359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:21:28.926368       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:21:28.996483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:21:28.996526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:21:28.996578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970] <==
	I1029 08:21:20.043946       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 08:21:20.048684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1029 08:21:20.048888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:21:20.048963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:21:20.049035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:21:20.053130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:21:20.053390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:21:20.053493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:21:20.053585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 08:21:20.053678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:21:20.053800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:21:20.053892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:21:20.053978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:21:20.054064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:21:20.054151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:21:20.054236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:21:20.054340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:21:20.054429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:21:20.054571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:21:20.054719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:21:21.074110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:21:21.078729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:21:21.081976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1029 08:21:21.139830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1029 08:21:22.743792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:25:22 addons-757691 kubelet[1271]: I1029 08:25:22.661295    1271 scope.go:117] "RemoveContainer" containerID="5849e7bdfaecd0e2ff95c936fb174560f4576d837190f2b4d1af7b45520e83bc"
	Oct 29 08:25:22 addons-757691 kubelet[1271]: I1029 08:25:22.680859    1271 scope.go:117] "RemoveContainer" containerID="db67124a23d6e361494e402e2ba7bc5dfc67ca1b0dc346abc92118ee42d0cd82"
	Oct 29 08:25:22 addons-757691 kubelet[1271]: E1029 08:25:22.702885    1271 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6f61eecd50834a16b33d2b79353a4293d1dd04daa0b34b8946d5501e0bb520bd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6f61eecd50834a16b33d2b79353a4293d1dd04daa0b34b8946d5501e0bb520bd/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/local-path-storage_helper-pod-create-pvc-e1dc20ec-fec2-44cc-ac2b-af307dd1a9cc_030452b9-d62e-4e30-bd69-f17049150eb9/helper-pod/0.log" to get inode usage: stat /var/log/pods/local-path-storage_helper-pod-create-pvc-e1dc20ec-fec2-44cc-ac2b-af307dd1a9cc_030452b9-d62e-4e30-bd69-f17049150eb9/helper-pod/0.log: no such file or directory
	Oct 29 08:25:22 addons-757691 kubelet[1271]: E1029 08:25:22.716059    1271 manager.go:1116] Failed to create existing container: /docker/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/crio/crio-db67124a23d6e361494e402e2ba7bc5dfc67ca1b0dc346abc92118ee42d0cd82: Error finding container db67124a23d6e361494e402e2ba7bc5dfc67ca1b0dc346abc92118ee42d0cd82: Status 404 returned error can't find the container with id db67124a23d6e361494e402e2ba7bc5dfc67ca1b0dc346abc92118ee42d0cd82
	Oct 29 08:25:22 addons-757691 kubelet[1271]: E1029 08:25:22.716528    1271 manager.go:1116] Failed to create existing container: /crio-c3207dd884e870501ac4365bd2e7a7413b3a4cdc5ed3e429117eced8603fa67c: Error finding container c3207dd884e870501ac4365bd2e7a7413b3a4cdc5ed3e429117eced8603fa67c: Status 404 returned error can't find the container with id c3207dd884e870501ac4365bd2e7a7413b3a4cdc5ed3e429117eced8603fa67c
	Oct 29 08:25:23 addons-757691 kubelet[1271]: I1029 08:25:23.529557    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-k472l" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:25:34 addons-757691 kubelet[1271]: I1029 08:25:34.531807    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7wrll" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:25:34 addons-757691 kubelet[1271]: I1029 08:25:34.531867    1271 scope.go:117] "RemoveContainer" containerID="624c007a074f908e33e4515651cab537992dc70ced86b0d2d3be7017570f00f4"
	Oct 29 08:25:34 addons-757691 kubelet[1271]: E1029 08:25:34.532006    1271 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7wrll_kube-system(dc216c07-bc5f-4a39-a59b-999712532cfd)\"" pod="kube-system/registry-creds-764b6fb674-7wrll" podUID="dc216c07-bc5f-4a39-a59b-999712532cfd"
	Oct 29 08:25:44 addons-757691 kubelet[1271]: I1029 08:25:44.530161    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-rmhqh" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:25:45 addons-757691 kubelet[1271]: I1029 08:25:45.529756    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7wrll" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:25:45 addons-757691 kubelet[1271]: I1029 08:25:45.529833    1271 scope.go:117] "RemoveContainer" containerID="624c007a074f908e33e4515651cab537992dc70ced86b0d2d3be7017570f00f4"
	Oct 29 08:25:45 addons-757691 kubelet[1271]: E1029 08:25:45.530166    1271 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7wrll_kube-system(dc216c07-bc5f-4a39-a59b-999712532cfd)\"" pod="kube-system/registry-creds-764b6fb674-7wrll" podUID="dc216c07-bc5f-4a39-a59b-999712532cfd"
	Oct 29 08:26:00 addons-757691 kubelet[1271]: I1029 08:26:00.532157    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7wrll" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:26:00 addons-757691 kubelet[1271]: I1029 08:26:00.532222    1271 scope.go:117] "RemoveContainer" containerID="624c007a074f908e33e4515651cab537992dc70ced86b0d2d3be7017570f00f4"
	Oct 29 08:26:01 addons-757691 kubelet[1271]: I1029 08:26:01.175586    1271 scope.go:117] "RemoveContainer" containerID="624c007a074f908e33e4515651cab537992dc70ced86b0d2d3be7017570f00f4"
	Oct 29 08:26:01 addons-757691 kubelet[1271]: I1029 08:26:01.175865    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7wrll" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:26:01 addons-757691 kubelet[1271]: I1029 08:26:01.175921    1271 scope.go:117] "RemoveContainer" containerID="35965c8fed7432340b0ee1023d7f701edea63a41a7a0c59fd67cd7c8465330d7"
	Oct 29 08:26:01 addons-757691 kubelet[1271]: E1029 08:26:01.176144    1271 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7wrll_kube-system(dc216c07-bc5f-4a39-a59b-999712532cfd)\"" pod="kube-system/registry-creds-764b6fb674-7wrll" podUID="dc216c07-bc5f-4a39-a59b-999712532cfd"
	Oct 29 08:26:12 addons-757691 kubelet[1271]: I1029 08:26:12.232671    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c595e89a-c4ce-4e32-b337-188a50796b2a-gcp-creds\") pod \"hello-world-app-5d498dc89-scv4b\" (UID: \"c595e89a-c4ce-4e32-b337-188a50796b2a\") " pod="default/hello-world-app-5d498dc89-scv4b"
	Oct 29 08:26:12 addons-757691 kubelet[1271]: I1029 08:26:12.233207    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7tf\" (UniqueName: \"kubernetes.io/projected/c595e89a-c4ce-4e32-b337-188a50796b2a-kube-api-access-cf7tf\") pod \"hello-world-app-5d498dc89-scv4b\" (UID: \"c595e89a-c4ce-4e32-b337-188a50796b2a\") " pod="default/hello-world-app-5d498dc89-scv4b"
	Oct 29 08:26:13 addons-757691 kubelet[1271]: I1029 08:26:13.529280    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7wrll" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:26:13 addons-757691 kubelet[1271]: I1029 08:26:13.529846    1271 scope.go:117] "RemoveContainer" containerID="35965c8fed7432340b0ee1023d7f701edea63a41a7a0c59fd67cd7c8465330d7"
	Oct 29 08:26:13 addons-757691 kubelet[1271]: E1029 08:26:13.530325    1271 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7wrll_kube-system(dc216c07-bc5f-4a39-a59b-999712532cfd)\"" pod="kube-system/registry-creds-764b6fb674-7wrll" podUID="dc216c07-bc5f-4a39-a59b-999712532cfd"
	Oct 29 08:26:13 addons-757691 kubelet[1271]: I1029 08:26:13.552721    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-scv4b" podStartSLOduration=0.891089358 podStartE2EDuration="1.552700269s" podCreationTimestamp="2025-10-29 08:26:12 +0000 UTC" firstStartedPulling="2025-10-29 08:26:12.480513661 +0000 UTC m=+290.056575351" lastFinishedPulling="2025-10-29 08:26:13.142124564 +0000 UTC m=+290.718186262" observedRunningTime="2025-10-29 08:26:13.253639473 +0000 UTC m=+290.829701187" watchObservedRunningTime="2025-10-29 08:26:13.552700269 +0000 UTC m=+291.128761959"
	
	
	==> storage-provisioner [bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78] <==
	W1029 08:25:49.328167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:51.331806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:51.336278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:53.339529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:53.346193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:55.349879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:55.354691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:57.360820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:57.369146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:59.372840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:59.379468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:01.383339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:01.391026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:03.402030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:03.412772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:05.416295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:05.420556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:07.423324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:07.430705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:09.435676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:09.447734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:11.451154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:11.455535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:13.467594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:13.473615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-757691 -n addons-757691
helpers_test.go:269: (dbg) Run:  kubectl --context addons-757691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-757691 describe pod ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-757691 describe pod ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l: exit status 1 (91.485338ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6btnm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gtc6l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-757691 describe pod ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (266.200537ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:26:15.796130   15022 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:26:15.796410   15022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:26:15.796443   15022 out.go:374] Setting ErrFile to fd 2...
	I1029 08:26:15.796463   15022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:26:15.797211   15022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:26:15.797573   15022 mustload.go:66] Loading cluster: addons-757691
	I1029 08:26:15.797996   15022 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:26:15.798037   15022 addons.go:607] checking whether the cluster is paused
	I1029 08:26:15.798167   15022 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:26:15.798196   15022 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:26:15.798650   15022 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:26:15.815995   15022 ssh_runner.go:195] Run: systemctl --version
	I1029 08:26:15.816063   15022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:26:15.838103   15022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:26:15.942844   15022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:26:15.942940   15022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:26:15.974401   15022 cri.go:89] found id: "35965c8fed7432340b0ee1023d7f701edea63a41a7a0c59fd67cd7c8465330d7"
	I1029 08:26:15.974426   15022 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:26:15.974433   15022 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:26:15.974437   15022 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:26:15.974440   15022 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:26:15.974444   15022 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:26:15.974452   15022 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:26:15.974455   15022 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:26:15.974459   15022 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:26:15.974465   15022 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:26:15.974469   15022 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:26:15.974472   15022 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:26:15.974475   15022 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:26:15.974479   15022 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:26:15.974483   15022 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:26:15.974496   15022 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:26:15.974501   15022 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:26:15.974504   15022 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:26:15.974507   15022 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:26:15.974510   15022 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:26:15.974516   15022 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:26:15.974519   15022 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:26:15.974522   15022 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:26:15.974526   15022 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:26:15.974536   15022 cri.go:89] found id: ""
	I1029 08:26:15.974586   15022 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:26:15.990104   15022 out.go:203] 
	W1029 08:26:15.993032   15022 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:26:15.993058   15022 out.go:285] * 
	* 
	W1029 08:26:15.997440   15022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:26:16.000649   15022 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
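
The stderr traces above all fail at the same point: before disabling an addon, minikube checks whether the cluster is paused (addons.go:607) by listing runc containers, and on this crio node "sudo runc list -f json" exits 1 because /run/runc does not exist, so every "addons disable" call returns exit status 11. A minimal reproduction sketch, assuming the profile name addons-757691 from this run and reusing only commands that already appear in the trace:

	# containers are visible through the CRI, exactly as the trace shows
	out/minikube-linux-arm64 -p addons-757691 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

	# the paused-state check then shells out to runc, which has no state directory on this node
	out/minikube-linux-arm64 -p addons-757691 ssh -- "sudo runc list -f json"
	# observed above: 'open /run/runc: no such file or directory', exit status 1

	# the directory the error message points at
	out/minikube-linux-arm64 -p addons-757691 ssh -- "ls -ld /run/runc"
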
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable ingress --alsologtostderr -v=1: exit status 11 (254.447457ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:26:16.061980   15067 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:26:16.062138   15067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:26:16.062151   15067 out.go:374] Setting ErrFile to fd 2...
	I1029 08:26:16.062157   15067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:26:16.062421   15067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:26:16.062702   15067 mustload.go:66] Loading cluster: addons-757691
	I1029 08:26:16.063066   15067 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:26:16.063084   15067 addons.go:607] checking whether the cluster is paused
	I1029 08:26:16.063189   15067 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:26:16.063204   15067 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:26:16.063749   15067 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:26:16.081856   15067 ssh_runner.go:195] Run: systemctl --version
	I1029 08:26:16.081920   15067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:26:16.099385   15067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:26:16.204092   15067 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:26:16.204186   15067 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:26:16.234731   15067 cri.go:89] found id: "35965c8fed7432340b0ee1023d7f701edea63a41a7a0c59fd67cd7c8465330d7"
	I1029 08:26:16.234799   15067 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:26:16.234812   15067 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:26:16.234817   15067 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:26:16.234821   15067 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:26:16.234825   15067 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:26:16.234828   15067 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:26:16.234831   15067 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:26:16.234835   15067 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:26:16.234841   15067 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:26:16.234867   15067 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:26:16.234889   15067 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:26:16.234900   15067 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:26:16.234904   15067 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:26:16.234907   15067 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:26:16.234913   15067 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:26:16.234923   15067 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:26:16.234927   15067 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:26:16.234930   15067 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:26:16.234933   15067 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:26:16.234939   15067 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:26:16.234942   15067 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:26:16.234945   15067 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:26:16.234948   15067 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:26:16.234951   15067 cri.go:89] found id: ""
	I1029 08:26:16.235000   15067 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:26:16.250630   15067 out.go:203] 
	W1029 08:26:16.253563   15067 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:26:16.253589   15067 out.go:285] * 
	* 
	W1029 08:26:16.257953   15067 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:26:16.260846   15067 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.46s)
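
The kube-apiserver log earlier in this report shows what the test had created before the disable calls started failing: the ingresses.networking.k8s.io admission evaluator was registered and clusterIPs were allocated for default/nginx and default/hello-world-app. A short post-mortem sketch, assuming the same kubectl context the helpers use; these commands are illustrative and not part of the test itself:

	kubectl --context addons-757691 get ingress,svc -n default
	kubectl --context addons-757691 -n ingress-nginx get pods -o wide
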

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lfsrs" [e6586fb1-1183-42d0-9c22-7b8bc3339799] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003405894s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (256.561476ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:23:50.595586   12568 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:50.595734   12568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:50.595740   12568 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:50.595744   12568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:50.595992   12568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:23:50.596265   12568 mustload.go:66] Loading cluster: addons-757691
	I1029 08:23:50.596696   12568 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:50.596719   12568 addons.go:607] checking whether the cluster is paused
	I1029 08:23:50.596835   12568 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:50.596851   12568 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:23:50.597280   12568 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:23:50.621874   12568 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:50.621929   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:23:50.639356   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:23:50.746899   12568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:50.746999   12568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:50.775774   12568 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:23:50.775792   12568 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:23:50.775797   12568 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:23:50.775800   12568 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:23:50.775804   12568 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:23:50.775808   12568 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:23:50.775811   12568 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:23:50.775814   12568 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:23:50.775817   12568 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:23:50.775823   12568 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:23:50.775827   12568 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:23:50.775830   12568 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:23:50.775833   12568 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:23:50.775836   12568 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:23:50.775839   12568 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:23:50.775846   12568 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:23:50.775853   12568 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:23:50.775857   12568 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:23:50.775861   12568 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:23:50.775864   12568 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:23:50.775868   12568 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:23:50.775871   12568 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:23:50.775874   12568 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:23:50.775877   12568 cri.go:89] found id: ""
	I1029 08:23:50.775924   12568 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:50.791606   12568 out.go:203] 
	W1029 08:23:50.795490   12568 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:50.795521   12568 out.go:285] * 
	* 
	W1029 08:23:50.799949   12568 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:50.803020   12568 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
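Note: this failure, and the identical MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED exits in the addon tests below, all trace back to the same paused-cluster check: minikube ssh-es into the node and runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node. A minimal sketch for replaying that check by hand, assuming the addons-757691 profile and the docker driver used in this run:

	# the node is a docker container under the docker driver, so the check can be replayed directly
	docker exec addons-757691 ls /run/runc
	# the same failing command from the log, via minikube's ssh helper
	out/minikube-linux-arm64 -p addons-757691 ssh "sudo runc list -f json"
	# crictl, used earlier in the same check, does list the kube-system containers
	out/minikube-linux-arm64 -p addons-757691 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"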

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.223838ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00358206s
addons_test.go:463: (dbg) Run:  kubectl --context addons-757691 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (331.329249ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:23:44.275857   12406 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:44.276000   12406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:44.276006   12406 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:44.276016   12406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:44.276414   12406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:23:44.276747   12406 mustload.go:66] Loading cluster: addons-757691
	I1029 08:23:44.277341   12406 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:44.277352   12406 addons.go:607] checking whether the cluster is paused
	I1029 08:23:44.277617   12406 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:44.277634   12406 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:23:44.278292   12406 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:23:44.313475   12406 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:44.313543   12406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:23:44.346251   12406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:23:44.459152   12406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:44.459232   12406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:44.495176   12406 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:23:44.495200   12406 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:23:44.495205   12406 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:23:44.495209   12406 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:23:44.495212   12406 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:23:44.495216   12406 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:23:44.495219   12406 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:23:44.495222   12406 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:23:44.495225   12406 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:23:44.495233   12406 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:23:44.495236   12406 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:23:44.495239   12406 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:23:44.495243   12406 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:23:44.495247   12406 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:23:44.495251   12406 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:23:44.495256   12406 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:23:44.495261   12406 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:23:44.495266   12406 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:23:44.495269   12406 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:23:44.495272   12406 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:23:44.495277   12406 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:23:44.495280   12406 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:23:44.495284   12406 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:23:44.495287   12406 cri.go:89] found id: ""
	I1029 08:23:44.495339   12406 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:44.525403   12406 out.go:203] 
	W1029 08:23:44.531391   12406 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:44.531420   12406 out.go:285] * 
	* 
	W1029 08:23:44.536786   12406 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:44.541082   12406 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.45s)
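Note: the metrics pipeline itself looked healthy in this run (the pod became Ready and the "kubectl top pods" call at addons_test.go:463 completed); only the disable call tripped the runc-based paused check described above. A quick way to confirm metrics-server is serving, assuming the same kube context:

	kubectl --context addons-757691 get apiservice v1beta1.metrics.k8s.io   # Available should be True when metrics-server is serving
	kubectl --context addons-757691 top pods -n kube-system                 # the same query the test ran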

                                                
                                    
x
+
TestAddons/parallel/CSI (40.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1029 08:23:27.474049    4550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1029 08:23:27.486811    4550 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1029 08:23:27.486841    4550 kapi.go:107] duration metric: took 12.805143ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.817844ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-757691 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/29 08:23:38 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-757691 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b609cc36-fd85-4e5d-b224-3679d049e48a] Pending
helpers_test.go:352: "task-pv-pod" [b609cc36-fd85-4e5d-b224-3679d049e48a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b609cc36-fd85-4e5d-b224-3679d049e48a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003319901s
addons_test.go:572: (dbg) Run:  kubectl --context addons-757691 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-757691 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-757691 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-757691 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-757691 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-757691 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-757691 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [30354463-cb72-4987-848c-839308f333e9] Pending
helpers_test.go:352: "task-pv-pod-restore" [30354463-cb72-4987-848c-839308f333e9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [30354463-cb72-4987-848c-839308f333e9] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00373503s
addons_test.go:614: (dbg) Run:  kubectl --context addons-757691 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-757691 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-757691 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (301.236638ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:24:07.150902   13236 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:07.151081   13236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:07.151093   13236 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:07.151098   13236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:07.151399   13236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:07.151716   13236 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:07.152129   13236 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:07.152145   13236 addons.go:607] checking whether the cluster is paused
	I1029 08:24:07.152301   13236 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:07.152351   13236 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:07.152835   13236 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:07.184567   13236 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:07.184688   13236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:07.204548   13236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:07.311074   13236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:07.311176   13236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:07.355148   13236 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:07.355172   13236 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:07.355177   13236 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:07.355181   13236 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:07.355189   13236 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:07.355193   13236 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:07.355197   13236 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:07.355200   13236 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:07.355203   13236 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:07.355209   13236 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:07.355213   13236 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:07.355216   13236 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:07.355219   13236 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:07.355222   13236 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:07.355225   13236 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:07.355230   13236 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:07.355237   13236 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:07.355241   13236 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:07.355244   13236 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:07.355247   13236 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:07.355251   13236 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:07.355254   13236 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:07.355258   13236 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:07.355273   13236 cri.go:89] found id: ""
	I1029 08:24:07.355355   13236 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:07.375442   13236 out.go:203] 
	W1029 08:24:07.380131   13236 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:07.380155   13236 out.go:285] * 
	* 
	W1029 08:24:07.384617   13236 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:07.391134   13236 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (362.964859ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:24:07.532241   13279 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:07.532632   13279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:07.532644   13279 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:07.532649   13279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:07.532927   13279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:07.533228   13279 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:07.533617   13279 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:07.533634   13279 addons.go:607] checking whether the cluster is paused
	I1029 08:24:07.533737   13279 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:07.533757   13279 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:07.534231   13279 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:07.554062   13279 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:07.554121   13279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:07.576744   13279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:07.695071   13279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:07.695158   13279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:07.725821   13279 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:07.725843   13279 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:07.725849   13279 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:07.725853   13279 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:07.725856   13279 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:07.725860   13279 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:07.725863   13279 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:07.725867   13279 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:07.725870   13279 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:07.725884   13279 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:07.725888   13279 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:07.725892   13279 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:07.725896   13279 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:07.725900   13279 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:07.725908   13279 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:07.725915   13279 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:07.725919   13279 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:07.725923   13279 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:07.725926   13279 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:07.725929   13279 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:07.725934   13279 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:07.725941   13279 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:07.725944   13279 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:07.725947   13279 cri.go:89] found id: ""
	I1029 08:24:07.725995   13279 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:07.743468   13279 out.go:203] 
	W1029 08:24:07.748609   13279 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:07.748637   13279 out.go:285] * 
	* 
	W1029 08:24:07.753042   13279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:07.756521   13279 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.29s)
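Note: the CSI data path in this test (PVC -> pod -> snapshot -> restored PVC -> restored pod) completed; only the trailing addon-disable calls failed on the runc check. The manifests themselves are not echoed in the log, so the following is an illustrative sketch of the snapshot/restore flow exercised above, with PVC and snapshot names taken from the log; the class names and size are assumptions, not the contents of the testdata/csi-hostpath-driver files:

	# illustrative only: the real manifests live under testdata/csi-hostpath-driver/ and are not shown in this log
	kubectl --context addons-757691 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-757691 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                  # assumed class name
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi                                   # assumed size
	EOF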

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-757691 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-757691 --alsologtostderr -v=1: exit status 11 (265.10878ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:23:24.289845   11552 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:24.290006   11552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:24.290012   11552 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:24.290016   11552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:24.290430   11552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:23:24.290897   11552 mustload.go:66] Loading cluster: addons-757691
	I1029 08:23:24.291964   11552 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:24.291993   11552 addons.go:607] checking whether the cluster is paused
	I1029 08:23:24.292156   11552 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:24.292176   11552 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:23:24.292659   11552 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:23:24.309532   11552 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:24.309593   11552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:23:24.330724   11552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:23:24.438975   11552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:24.439094   11552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:24.467825   11552 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:23:24.467844   11552 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:23:24.467849   11552 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:23:24.467853   11552 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:23:24.467857   11552 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:23:24.467861   11552 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:23:24.467864   11552 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:23:24.467867   11552 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:23:24.467870   11552 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:23:24.467875   11552 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:23:24.467879   11552 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:23:24.467882   11552 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:23:24.467885   11552 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:23:24.467888   11552 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:23:24.467891   11552 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:23:24.467895   11552 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:23:24.467898   11552 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:23:24.467903   11552 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:23:24.467906   11552 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:23:24.467909   11552 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:23:24.467913   11552 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:23:24.467916   11552 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:23:24.467919   11552 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:23:24.467922   11552 cri.go:89] found id: ""
	I1029 08:23:24.467969   11552 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:24.483269   11552 out.go:203] 
	W1029 08:23:24.486114   11552 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:24.486139   11552 out.go:285] * 
	* 
	W1029 08:23:24.490414   11552 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:24.493348   11552 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-757691 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-757691
helpers_test.go:243: (dbg) docker inspect addons-757691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33",
	        "Created": "2025-10-29T08:21:00.554043188Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:21:00.623778281Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/hosts",
	        "LogPath": "/var/lib/docker/containers/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33/bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33-json.log",
	        "Name": "/addons-757691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-757691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-757691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf6f603e4d4f443578279c81f1a6dab5536260b406a0927d33375716db0cda33",
	                "LowerDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0343dbfabbff552f1b5518a68d37b37ac7bed7cbe479ac99b476cc92a9c688a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-757691",
	                "Source": "/var/lib/docker/volumes/addons-757691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-757691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-757691",
	                "name.minikube.sigs.k8s.io": "addons-757691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e700d532a9402dbf516f0e568893bb7dc91a62b88f9bd6512ec824d3c9df021",
	            "SandboxKey": "/var/run/docker/netns/5e700d532a94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-757691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:f8:84:6c:98:8a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdc142442313fd40792fd7b16d636299c5bcbfc81c2066be50b2e2d2b3915e19",
	                    "EndpointID": "aa60f742795b48b6136657f03a241a7aa9362d6eb5e10ab2a35ccc1e76d01a8c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-757691",
	                        "bf6f603e4d4f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-757691 -n addons-757691
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-757691 logs -n 25: (1.479105551s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-675275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-675275   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-675275                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-675275   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ -o=json --download-only -p download-only-968722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-968722   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-968722                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-968722   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-675275                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-675275   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-968722                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-968722   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ --download-only -p download-docker-024522 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-024522 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ -p download-docker-024522                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-024522 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ --download-only -p binary-mirror-301132 --alsologtostderr --binary-mirror http://127.0.0.1:43123 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-301132   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-301132                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-301132   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ addons  │ disable dashboard -p addons-757691                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-757691                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ start   │ -p addons-757691 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:23 UTC │
	│ addons  │ addons-757691 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-757691 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ enable headlamp -p addons-757691 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-757691          │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:34.389589    5303 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:34.389717    5303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:34.389727    5303 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:34.389733    5303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:34.390441    5303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:20:34.390957    5303 out.go:368] Setting JSON to false
	I1029 08:20:34.391691    5303 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":186,"bootTime":1761725848,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:20:34.391758    5303 start.go:143] virtualization:  
	I1029 08:20:34.395063    5303 out.go:179] * [addons-757691] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:20:34.398798    5303 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:20:34.398882    5303 notify.go:221] Checking for updates...
	I1029 08:20:34.404717    5303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:34.407567    5303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:20:34.410356    5303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:20:34.413197    5303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:20:34.416112    5303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:20:34.419135    5303 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:34.450005    5303 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:20:34.450133    5303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:34.506691    5303 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-29 08:20:34.497025876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:34.506817    5303 docker.go:319] overlay module found
	I1029 08:20:34.510088    5303 out.go:179] * Using the docker driver based on user configuration
	I1029 08:20:34.513069    5303 start.go:309] selected driver: docker
	I1029 08:20:34.513092    5303 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:34.513106    5303 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:20:34.513798    5303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:34.577263    5303 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-29 08:20:34.567724839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:34.577422    5303 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:34.577655    5303 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:20:34.580638    5303 out.go:179] * Using Docker driver with root privileges
	I1029 08:20:34.583505    5303 cni.go:84] Creating CNI manager for ""
	I1029 08:20:34.583565    5303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:34.583577    5303 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:34.583670    5303 start.go:353] cluster config:
	{Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1029 08:20:34.586722    5303 out.go:179] * Starting "addons-757691" primary control-plane node in "addons-757691" cluster
	I1029 08:20:34.589517    5303 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:20:34.592504    5303 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:20:34.595356    5303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:34.595405    5303 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:20:34.595417    5303 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:34.595506    5303 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:20:34.595521    5303 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:20:34.595846    5303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/config.json ...
	I1029 08:20:34.595872    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/config.json: {Name:mk483fc51061c028c7d42c844695485f626c1c3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:34.596038    5303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:20:34.610967    5303 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:34.611079    5303 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1029 08:20:34.611102    5303 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1029 08:20:34.611110    5303 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1029 08:20:34.611118    5303 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1029 08:20:34.611124    5303 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1029 08:20:52.364681    5303 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1029 08:20:52.364724    5303 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:20:52.364755    5303 start.go:360] acquireMachinesLock for addons-757691: {Name:mk8f6dfa288988e6cf9ac15aaaee63ecff02dc5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:20:52.364876    5303 start.go:364] duration metric: took 99.293µs to acquireMachinesLock for "addons-757691"
	I1029 08:20:52.364910    5303 start.go:93] Provisioning new machine with config: &{Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:20:52.364999    5303 start.go:125] createHost starting for "" (driver="docker")
	I1029 08:20:52.368434    5303 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1029 08:20:52.368681    5303 start.go:159] libmachine.API.Create for "addons-757691" (driver="docker")
	I1029 08:20:52.368726    5303 client.go:173] LocalClient.Create starting
	I1029 08:20:52.368850    5303 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 08:20:53.095366    5303 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 08:20:53.527989    5303 cli_runner.go:164] Run: docker network inspect addons-757691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 08:20:53.544511    5303 cli_runner.go:211] docker network inspect addons-757691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 08:20:53.544591    5303 network_create.go:284] running [docker network inspect addons-757691] to gather additional debugging logs...
	I1029 08:20:53.544610    5303 cli_runner.go:164] Run: docker network inspect addons-757691
	W1029 08:20:53.560372    5303 cli_runner.go:211] docker network inspect addons-757691 returned with exit code 1
	I1029 08:20:53.560403    5303 network_create.go:287] error running [docker network inspect addons-757691]: docker network inspect addons-757691: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-757691 not found
	I1029 08:20:53.560429    5303 network_create.go:289] output of [docker network inspect addons-757691]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-757691 not found
	
	** /stderr **
	I1029 08:20:53.560538    5303 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:20:53.577238    5303 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018ebe70}
	I1029 08:20:53.577290    5303 network_create.go:124] attempt to create docker network addons-757691 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1029 08:20:53.577345    5303 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-757691 addons-757691
	I1029 08:20:53.633409    5303 network_create.go:108] docker network addons-757691 192.168.49.0/24 created
	I1029 08:20:53.633441    5303 kic.go:121] calculated static IP "192.168.49.2" for the "addons-757691" container
	I1029 08:20:53.633534    5303 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 08:20:53.650546    5303 cli_runner.go:164] Run: docker volume create addons-757691 --label name.minikube.sigs.k8s.io=addons-757691 --label created_by.minikube.sigs.k8s.io=true
	I1029 08:20:53.670256    5303 oci.go:103] Successfully created a docker volume addons-757691
	I1029 08:20:53.670346    5303 cli_runner.go:164] Run: docker run --rm --name addons-757691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757691 --entrypoint /usr/bin/test -v addons-757691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 08:20:55.992225    5303 cli_runner.go:217] Completed: docker run --rm --name addons-757691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757691 --entrypoint /usr/bin/test -v addons-757691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.321826911s)
	I1029 08:20:55.992254    5303 oci.go:107] Successfully prepared a docker volume addons-757691
	I1029 08:20:55.992278    5303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:55.992295    5303 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 08:20:55.992389    5303 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-757691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 08:21:00.463875    5303 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-757691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471449436s)
	I1029 08:21:00.463917    5303 kic.go:203] duration metric: took 4.471616118s to extract preloaded images to volume ...
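Note: the two throwaway "docker run --rm" invocations above are how the preloaded image tarball lands inside the node's /var volume before the node container exists: a short-lived container mounts the named volume together with the host tarball and untars it with lz4. A hedged stand-alone sketch of the same pattern (volume name, tarball path, and base image are copied from the log; the helper itself is illustrative, not minikube's implementation):

	// extractpreload.go - untar a preloaded image tarball into a named Docker volume
	// by running a short-lived container, mirroring the command recorded in the log.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773" // digest suffix omitted here for brevity
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "addons-757691:/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}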
	W1029 08:21:00.464137    5303 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 08:21:00.464264    5303 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 08:21:00.537162    5303 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-757691 --name addons-757691 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-757691 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-757691 --network addons-757691 --ip 192.168.49.2 --volume addons-757691:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 08:21:00.887662    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Running}}
	I1029 08:21:00.910604    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:00.936037    5303 cli_runner.go:164] Run: docker exec addons-757691 stat /var/lib/dpkg/alternatives/iptables
	I1029 08:21:00.994545    5303 oci.go:144] the created container "addons-757691" has a running status.
	I1029 08:21:00.994574    5303 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa...
	I1029 08:21:02.082350    5303 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 08:21:02.108756    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:02.126538    5303 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 08:21:02.126560    5303 kic_runner.go:114] Args: [docker exec --privileged addons-757691 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 08:21:02.167522    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:02.188062    5303 machine.go:94] provisionDockerMachine start ...
	I1029 08:21:02.188172    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.205849    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:02.206187    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:02.206203    5303 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:21:02.356048    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-757691
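Note: the "native" SSH client above is a plain Go SSH connection to the mapped loopback port (127.0.0.1:32768) as the docker user, authenticated with the id_rsa key created a few lines earlier; the `hostname` round trip verifies the machine is reachable before provisioning continues. A minimal sketch of that probe with golang.org/x/crypto/ssh (address, user, and key path come from this log; the code is illustrative, not libmachine's implementation):

	// sshprobe.go - run `hostname` over SSH against the mapped port, roughly what
	// the provisioner's first check does. Illustrative sketch only.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local loopback-only test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}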
	
	I1029 08:21:02.356086    5303 ubuntu.go:182] provisioning hostname "addons-757691"
	I1029 08:21:02.356161    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.375115    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:02.375427    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:02.375439    5303 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-757691 && echo "addons-757691" | sudo tee /etc/hostname
	I1029 08:21:02.533957    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-757691
	
	I1029 08:21:02.534037    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.552090    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:02.552447    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:02.552473    5303 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-757691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-757691/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-757691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:21:02.700735    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:21:02.700830    5303 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:21:02.700887    5303 ubuntu.go:190] setting up certificates
	I1029 08:21:02.700921    5303 provision.go:84] configureAuth start
	I1029 08:21:02.701018    5303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757691
	I1029 08:21:02.718349    5303 provision.go:143] copyHostCerts
	I1029 08:21:02.718431    5303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:21:02.718549    5303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:21:02.718613    5303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:21:02.718659    5303 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.addons-757691 san=[127.0.0.1 192.168.49.2 addons-757691 localhost minikube]
	I1029 08:21:02.952766    5303 provision.go:177] copyRemoteCerts
	I1029 08:21:02.952847    5303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:21:02.952888    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:02.970015    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.075996    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:21:03.093709    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:21:03.111830    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:21:03.129239    5303 provision.go:87] duration metric: took 428.29073ms to configureAuth
	I1029 08:21:03.129263    5303 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:21:03.129449    5303 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:03.129555    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.147491    5303 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:03.147801    5303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:21:03.147815    5303 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:21:03.403990    5303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:21:03.404007    5303 machine.go:97] duration metric: took 1.215919808s to provisionDockerMachine
	I1029 08:21:03.404017    5303 client.go:176] duration metric: took 11.035279731s to LocalClient.Create
	I1029 08:21:03.404030    5303 start.go:167] duration metric: took 11.035352118s to libmachine.API.Create "addons-757691"
	I1029 08:21:03.404038    5303 start.go:293] postStartSetup for "addons-757691" (driver="docker")
	I1029 08:21:03.404048    5303 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:21:03.404125    5303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:21:03.404167    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.427208    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.532438    5303 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:21:03.535683    5303 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:21:03.535713    5303 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:21:03.535725    5303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:21:03.535794    5303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:21:03.535821    5303 start.go:296] duration metric: took 131.777723ms for postStartSetup
	I1029 08:21:03.536162    5303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757691
	I1029 08:21:03.553350    5303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/config.json ...
	I1029 08:21:03.553638    5303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:21:03.553692    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.571554    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.673127    5303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:21:03.677880    5303 start.go:128] duration metric: took 11.312866034s to createHost
	I1029 08:21:03.677905    5303 start.go:83] releasing machines lock for "addons-757691", held for 11.313013326s
	I1029 08:21:03.677973    5303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-757691
	I1029 08:21:03.696042    5303 ssh_runner.go:195] Run: cat /version.json
	I1029 08:21:03.696119    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.696396    5303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:21:03.696457    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:03.719562    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.723286    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:03.824201    5303 ssh_runner.go:195] Run: systemctl --version
	I1029 08:21:03.919108    5303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:21:03.958568    5303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:21:03.963034    5303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:21:03.963122    5303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:21:03.992611    5303 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 08:21:03.992633    5303 start.go:496] detecting cgroup driver to use...
	I1029 08:21:03.992670    5303 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:21:03.992756    5303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:21:04.013637    5303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:21:04.027192    5303 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:21:04.027254    5303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:21:04.045358    5303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:21:04.063801    5303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:21:04.185562    5303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:21:04.305550    5303 docker.go:234] disabling docker service ...
	I1029 08:21:04.305681    5303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:21:04.325555    5303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:21:04.338645    5303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:21:04.460546    5303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:21:04.579318    5303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:21:04.591488    5303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:21:04.605202    5303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:21:04.605281    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.614346    5303 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:21:04.614425    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.623194    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.631780    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.640944    5303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:21:04.649197    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.657882    5303 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:04.671328    5303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
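Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place so that CRI-O uses minikube's expected pause image and cgroup driver and lets pods bind privileged ports. Reconstructed from those sed expressions (not captured from the machine, and the keys may sit under different TOML sections in the real file), the drop-in ends up roughly like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]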
	I1029 08:21:04.679937    5303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:21:04.687394    5303 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1029 08:21:04.687459    5303 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1029 08:21:04.701946    5303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:21:04.709355    5303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:04.832603    5303 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:21:04.973274    5303 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:21:04.973427    5303 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:21:04.977126    5303 start.go:564] Will wait 60s for crictl version
	I1029 08:21:04.977234    5303 ssh_runner.go:195] Run: which crictl
	I1029 08:21:04.980613    5303 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:21:05.005905    5303 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:21:05.006072    5303 ssh_runner.go:195] Run: crio --version
	I1029 08:21:05.038988    5303 ssh_runner.go:195] Run: crio --version
	I1029 08:21:05.068954    5303 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:21:05.071881    5303 cli_runner.go:164] Run: docker network inspect addons-757691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:21:05.088660    5303 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:21:05.092508    5303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:21:05.102214    5303 kubeadm.go:884] updating cluster {Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:21:05.102334    5303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:21:05.102391    5303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:21:05.138936    5303 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:21:05.138960    5303 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:21:05.139021    5303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:21:05.165053    5303 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:21:05.165076    5303 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:21:05.165086    5303 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:21:05.165214    5303 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-757691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:21:05.165303    5303 ssh_runner.go:195] Run: crio config
	I1029 08:21:05.218862    5303 cni.go:84] Creating CNI manager for ""
	I1029 08:21:05.218888    5303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:21:05.218906    5303 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:21:05.218930    5303 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-757691 NodeName:addons-757691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:21:05.219055    5303 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-757691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
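
A few lines below, this generated config is copied to /var/tmp/minikube/kubeadm.yaml.new and then promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. A hedged sketch of how such a multi-document kubeadm config could be sanity-checked with gopkg.in/yaml.v3 (not part of minikube itself; only the file path is taken from the log):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// The file contains several documents separated by "---"
	// (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err) // malformed document
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}
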
	
	I1029 08:21:05.219135    5303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:21:05.226698    5303 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:21:05.226807    5303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 08:21:05.234162    5303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:21:05.246806    5303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:21:05.259843    5303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1029 08:21:05.272170    5303 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1029 08:21:05.275793    5303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:21:05.285344    5303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:05.402628    5303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:21:05.418124    5303 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691 for IP: 192.168.49.2
	I1029 08:21:05.418193    5303 certs.go:195] generating shared ca certs ...
	I1029 08:21:05.418227    5303 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.418376    5303 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:21:05.709824    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt ...
	I1029 08:21:05.709856    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt: {Name:mk72169ccc25d4f6f0cad61bec2049a2dde9625a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.710080    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key ...
	I1029 08:21:05.710096    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key: {Name:mkae08a7d3fefa5e6571e0738456d0b61fd12ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.710189    5303 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:21:05.871204    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt ...
	I1029 08:21:05.871234    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt: {Name:mkf21d0bebeaa7c7b9c32d969e54b889f5ddf480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.871399    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key ...
	I1029 08:21:05.871414    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key: {Name:mkedc9b0619550237fb62c786cd16da5244a6baa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.871493    5303 certs.go:257] generating profile certs ...
	I1029 08:21:05.871555    5303 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.key
	I1029 08:21:05.871573    5303 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt with IP's: []
	I1029 08:21:05.976304    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt ...
	I1029 08:21:05.976337    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: {Name:mkb3d88a06621a28a140eadcc69a46fa07f7f7f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.976528    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.key ...
	I1029 08:21:05.976542    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.key: {Name:mka6c52d70446e43df54bc9e976975be9ab1708c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:05.976624    5303 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e
	I1029 08:21:05.976648    5303 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1029 08:21:06.296197    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e ...
	I1029 08:21:06.296228    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e: {Name:mk420f0377df64628682fdeb88f7df4473686247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.296421    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e ...
	I1029 08:21:06.296436    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e: {Name:mk6bf3445c2ecf0a18c24fb42b640fb4db7eafeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.296521    5303 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt.38555e1e -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt
	I1029 08:21:06.296600    5303 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key.38555e1e -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key
	I1029 08:21:06.296660    5303 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key
	I1029 08:21:06.296684    5303 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt with IP's: []
	I1029 08:21:06.833764    5303 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt ...
	I1029 08:21:06.833793    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt: {Name:mk7b299db1251ef2ab798abf32d639d45537eb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.833964    5303 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key ...
	I1029 08:21:06.833975    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key: {Name:mk0494e471f0e51e062592824e50500af09883dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:06.834196    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:21:06.834235    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:21:06.834263    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:21:06.834294    5303 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
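
The certs.go steps above create a shared "minikubeCA" authority, then sign per-profile client, apiserver and aggregator certificates before copying them into /var/lib/minikube/certs. The following is a minimal, self-contained sketch of that CA-plus-signed-client-cert flow using Go's standard crypto/x509; it is not the code path minikube uses, RSA keys and the "system:masters" organization are assumptions, and error handling is omitted for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Self-signed CA, analogous to minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate signed by that CA, analogous to the profile's client.crt.
	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, _ := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)

	// Write PEM files, mirroring the ca.crt / client.crt paths in the log.
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caDER}), 0644)
	_ = os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cliDER}), 0644)
}
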
	I1029 08:21:06.834916    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:21:06.862520    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:21:06.882250    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:21:06.903616    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:21:06.922467    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 08:21:06.940166    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:21:06.957558    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:21:06.974121    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 08:21:06.991516    5303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:21:07.010895    5303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:21:07.024094    5303 ssh_runner.go:195] Run: openssl version
	I1029 08:21:07.030497    5303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:21:07.039085    5303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:07.042708    5303 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:07.042800    5303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:07.083544    5303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:21:07.091607    5303 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:21:07.094913    5303 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 08:21:07.094960    5303 kubeadm.go:401] StartCluster: {Name:addons-757691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-757691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:21:07.095065    5303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:21:07.095135    5303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:21:07.123146    5303 cri.go:89] found id: ""
	I1029 08:21:07.123263    5303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:21:07.130888    5303 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 08:21:07.138483    5303 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 08:21:07.138601    5303 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 08:21:07.146512    5303 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 08:21:07.146578    5303 kubeadm.go:158] found existing configuration files:
	
	I1029 08:21:07.146634    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 08:21:07.154264    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 08:21:07.154326    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 08:21:07.161577    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 08:21:07.169202    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 08:21:07.169348    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 08:21:07.176271    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 08:21:07.183662    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 08:21:07.183752    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 08:21:07.190882    5303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 08:21:07.198232    5303 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 08:21:07.198311    5303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 08:21:07.205223    5303 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 08:21:07.243360    5303 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 08:21:07.243599    5303 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 08:21:07.268248    5303 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 08:21:07.268411    5303 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1029 08:21:07.268488    5303 kubeadm.go:319] OS: Linux
	I1029 08:21:07.268568    5303 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 08:21:07.268658    5303 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1029 08:21:07.268756    5303 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 08:21:07.268815    5303 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 08:21:07.268869    5303 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 08:21:07.268922    5303 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 08:21:07.268972    5303 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 08:21:07.269027    5303 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 08:21:07.269078    5303 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1029 08:21:07.333493    5303 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 08:21:07.333665    5303 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 08:21:07.333800    5303 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 08:21:07.343717    5303 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 08:21:07.350353    5303 out.go:252]   - Generating certificates and keys ...
	I1029 08:21:07.350464    5303 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 08:21:07.350546    5303 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 08:21:07.865851    5303 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 08:21:08.065573    5303 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 08:21:08.234567    5303 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 08:21:08.607398    5303 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 08:21:08.959531    5303 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 08:21:08.959926    5303 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-757691 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:21:09.485625    5303 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 08:21:09.485982    5303 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-757691 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:21:10.044126    5303 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 08:21:10.225313    5303 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 08:21:11.610594    5303 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 08:21:11.610885    5303 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 08:21:12.903936    5303 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 08:21:13.674659    5303 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 08:21:14.082470    5303 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 08:21:14.539988    5303 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 08:21:14.609614    5303 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 08:21:14.610212    5303 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 08:21:14.612982    5303 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 08:21:14.616259    5303 out.go:252]   - Booting up control plane ...
	I1029 08:21:14.616393    5303 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 08:21:14.616491    5303 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 08:21:14.617498    5303 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 08:21:14.632782    5303 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 08:21:14.633161    5303 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 08:21:14.641558    5303 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 08:21:14.641894    5303 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 08:21:14.642074    5303 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 08:21:14.771305    5303 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 08:21:14.771456    5303 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 08:21:15.272480    5303 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.52914ms
	I1029 08:21:15.275967    5303 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 08:21:15.276064    5303 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1029 08:21:15.276418    5303 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 08:21:15.276582    5303 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 08:21:19.641110    5303 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.364710349s
	I1029 08:21:20.058013    5303 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.781967061s
	I1029 08:21:21.777587    5303 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501511678s
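
The control-plane-check phase above polls three endpoints until each reports healthy: the apiserver /livez on 192.168.49.2:8443, the controller-manager /healthz on 127.0.0.1:10257, and the scheduler /livez on 127.0.0.1:10259. A rough Go sketch of that kind of poll (endpoints taken from the log; skipping TLS verification is an assumption made only for this local probe, since the components serve self-signed certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	endpoints := []string{
		"https://192.168.49.2:8443/livez",  // kube-apiserver
		"https://127.0.0.1:10257/healthz",  // kube-controller-manager
		"https://127.0.0.1:10259/livez",    // kube-scheduler
	}
	for _, url := range endpoints {
		fmt.Println(url, waitHealthy(url, 4*time.Minute))
	}
}
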
	I1029 08:21:21.797564    5303 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 08:21:21.812798    5303 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 08:21:21.827634    5303 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 08:21:21.827900    5303 kubeadm.go:319] [mark-control-plane] Marking the node addons-757691 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 08:21:21.840846    5303 kubeadm.go:319] [bootstrap-token] Using token: k6kkly.9wi997fhhyt35ncy
	I1029 08:21:21.845941    5303 out.go:252]   - Configuring RBAC rules ...
	I1029 08:21:21.846086    5303 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 08:21:21.847552    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 08:21:21.855219    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 08:21:21.861410    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 08:21:21.865309    5303 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 08:21:21.869320    5303 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 08:21:22.184244    5303 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 08:21:22.620060    5303 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 08:21:23.184553    5303 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 08:21:23.185610    5303 kubeadm.go:319] 
	I1029 08:21:23.185707    5303 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 08:21:23.185719    5303 kubeadm.go:319] 
	I1029 08:21:23.185819    5303 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 08:21:23.185832    5303 kubeadm.go:319] 
	I1029 08:21:23.185859    5303 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 08:21:23.185921    5303 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 08:21:23.185974    5303 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 08:21:23.185979    5303 kubeadm.go:319] 
	I1029 08:21:23.186036    5303 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 08:21:23.186040    5303 kubeadm.go:319] 
	I1029 08:21:23.186089    5303 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 08:21:23.186094    5303 kubeadm.go:319] 
	I1029 08:21:23.186149    5303 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 08:21:23.186228    5303 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 08:21:23.186299    5303 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 08:21:23.186304    5303 kubeadm.go:319] 
	I1029 08:21:23.186401    5303 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 08:21:23.186488    5303 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 08:21:23.186493    5303 kubeadm.go:319] 
	I1029 08:21:23.186580    5303 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k6kkly.9wi997fhhyt35ncy \
	I1029 08:21:23.186695    5303 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 08:21:23.186717    5303 kubeadm.go:319] 	--control-plane 
	I1029 08:21:23.186723    5303 kubeadm.go:319] 
	I1029 08:21:23.186811    5303 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 08:21:23.186817    5303 kubeadm.go:319] 
	I1029 08:21:23.186902    5303 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k6kkly.9wi997fhhyt35ncy \
	I1029 08:21:23.187009    5303 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 08:21:23.189464    5303 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 08:21:23.189729    5303 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 08:21:23.189855    5303 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 08:21:23.189866    5303 cni.go:84] Creating CNI manager for ""
	I1029 08:21:23.189874    5303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:21:23.193048    5303 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 08:21:23.195904    5303 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 08:21:23.199610    5303 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 08:21:23.199627    5303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 08:21:23.211972    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 08:21:23.509499    5303 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 08:21:23.509594    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:23.509624    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-757691 minikube.k8s.io/updated_at=2025_10_29T08_21_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=addons-757691 minikube.k8s.io/primary=true
	I1029 08:21:23.649350    5303 ops.go:34] apiserver oom_adj: -16
	I1029 08:21:23.649492    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:24.149539    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:24.649635    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:25.149625    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:25.649769    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:26.150337    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:26.650528    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:27.150173    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:27.649644    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:28.150384    5303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:21:28.284785    5303 kubeadm.go:1114] duration metric: took 4.775256007s to wait for elevateKubeSystemPrivileges
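
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube retries roughly every half second until the default service account exists, then grants kube-system cluster-admin. A sketch of the same polling via os/exec (kubectl path and kubeconfig taken from the log; the two-minute timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the default service account is visible.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for the default service account")
}
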
	I1029 08:21:28.284818    5303 kubeadm.go:403] duration metric: took 21.189860871s to StartCluster
	I1029 08:21:28.284835    5303 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:28.284942    5303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:21:28.285309    5303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:28.285499    5303 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:21:28.285663    5303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 08:21:28.285918    5303 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:28.285946    5303 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1029 08:21:28.286018    5303 addons.go:70] Setting yakd=true in profile "addons-757691"
	I1029 08:21:28.286031    5303 addons.go:239] Setting addon yakd=true in "addons-757691"
	I1029 08:21:28.286052    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.286500    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.287090    5303 addons.go:70] Setting metrics-server=true in profile "addons-757691"
	I1029 08:21:28.287109    5303 addons.go:239] Setting addon metrics-server=true in "addons-757691"
	I1029 08:21:28.287132    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.287535    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.288792    5303 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-757691"
	I1029 08:21:28.288865    5303 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-757691"
	I1029 08:21:28.288906    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.291181    5303 addons.go:70] Setting registry=true in profile "addons-757691"
	I1029 08:21:28.291410    5303 addons.go:239] Setting addon registry=true in "addons-757691"
	I1029 08:21:28.291691    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.292209    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.293100    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.291324    5303 addons.go:70] Setting registry-creds=true in profile "addons-757691"
	I1029 08:21:28.298563    5303 addons.go:239] Setting addon registry-creds=true in "addons-757691"
	I1029 08:21:28.298606    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.299068    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.293243    5303 out.go:179] * Verifying Kubernetes components...
	I1029 08:21:28.290727    5303 addons.go:70] Setting cloud-spanner=true in profile "addons-757691"
	I1029 08:21:28.304715    5303 addons.go:239] Setting addon cloud-spanner=true in "addons-757691"
	I1029 08:21:28.304786    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.305329    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.316151    5303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:28.290733    5303 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-757691"
	I1029 08:21:28.317947    5303 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-757691"
	I1029 08:21:28.317982    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.318442    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290740    5303 addons.go:70] Setting default-storageclass=true in profile "addons-757691"
	I1029 08:21:28.329550    5303 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-757691"
	I1029 08:21:28.329879    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290744    5303 addons.go:70] Setting gcp-auth=true in profile "addons-757691"
	I1029 08:21:28.345186    5303 mustload.go:66] Loading cluster: addons-757691
	I1029 08:21:28.345393    5303 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:28.345635    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290747    5303 addons.go:70] Setting ingress=true in profile "addons-757691"
	I1029 08:21:28.353863    5303 addons.go:239] Setting addon ingress=true in "addons-757691"
	I1029 08:21:28.353941    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.354479    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290750    5303 addons.go:70] Setting ingress-dns=true in profile "addons-757691"
	I1029 08:21:28.388720    5303 addons.go:239] Setting addon ingress-dns=true in "addons-757691"
	I1029 08:21:28.388780    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.389228    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.399832    5303 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1029 08:21:28.290754    5303 addons.go:70] Setting inspektor-gadget=true in profile "addons-757691"
	I1029 08:21:28.405516    5303 addons.go:239] Setting addon inspektor-gadget=true in "addons-757691"
	I1029 08:21:28.405555    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.406013    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.413037    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1029 08:21:28.413067    5303 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1029 08:21:28.413133    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.434997    5303 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1029 08:21:28.291330    5303 addons.go:70] Setting storage-provisioner=true in profile "addons-757691"
	I1029 08:21:28.436423    5303 addons.go:239] Setting addon storage-provisioner=true in "addons-757691"
	I1029 08:21:28.436462    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.436955    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.440727    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1029 08:21:28.440749    5303 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1029 08:21:28.440825    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.291334    5303 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-757691"
	I1029 08:21:28.472582    5303 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-757691"
	I1029 08:21:28.472894    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.291337    5303 addons.go:70] Setting volcano=true in profile "addons-757691"
	I1029 08:21:28.484866    5303 addons.go:239] Setting addon volcano=true in "addons-757691"
	I1029 08:21:28.484904    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.485392    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.291340    5303 addons.go:70] Setting volumesnapshots=true in profile "addons-757691"
	I1029 08:21:28.505712    5303 addons.go:239] Setting addon volumesnapshots=true in "addons-757691"
	I1029 08:21:28.505752    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.506225    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.290717    5303 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-757691"
	I1029 08:21:28.516727    5303 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-757691"
	I1029 08:21:28.516776    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.517242    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.559045    5303 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1029 08:21:28.564489    5303 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:21:28.564516    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1029 08:21:28.564583    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.577825    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1029 08:21:28.607862    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1029 08:21:28.611632    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1029 08:21:28.611766    5303 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1029 08:21:28.611821    5303 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1029 08:21:28.642730    5303 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:21:28.642808    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1029 08:21:28.642909    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.650234    5303 addons.go:239] Setting addon default-storageclass=true in "addons-757691"
	I1029 08:21:28.650328    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.650928    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.611918    5303 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1029 08:21:28.658885    5303 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1029 08:21:28.659563    5303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 08:21:28.660840    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.660924    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.662494    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1029 08:21:28.662501    5303 out.go:179]   - Using image docker.io/registry:3.0.0
	I1029 08:21:28.662538    5303 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:21:28.668466    5303 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1029 08:21:28.669662    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1029 08:21:28.669738    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.692457    5303 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1029 08:21:28.692533    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1029 08:21:28.692644    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.724141    5303 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-757691"
	I1029 08:21:28.724245    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:28.724960    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:28.754551    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1029 08:21:28.754726    5303 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 08:21:28.758316    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1029 08:21:28.758553    5303 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:21:28.758594    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 08:21:28.758687    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.765502    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1029 08:21:28.768459    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1029 08:21:28.769213    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:28.772153    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1029 08:21:28.772289    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1029 08:21:28.772342    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1029 08:21:28.772455    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.781002    5303 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1029 08:21:28.781032    5303 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1029 08:21:28.781097    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.804623    5303 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1029 08:21:28.804644    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1029 08:21:28.804717    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	W1029 08:21:28.839899    5303 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1029 08:21:28.840417    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.843472    5303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:21:28.843981    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.849680    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:28.849822    5303 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1029 08:21:28.850302    5303 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1029 08:21:28.854136    5303 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:21:28.854159    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1029 08:21:28.854226    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.854422    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1029 08:21:28.854432    5303 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1029 08:21:28.854481    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.880804    5303 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:21:28.880823    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1029 08:21:28.880879    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.922592    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.923779    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.940569    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.946850    5303 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 08:21:28.946870    5303 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 08:21:28.946930    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:28.968418    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.994088    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:28.996129    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.006144    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.007297    5303 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1029 08:21:29.009933    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.015607    5303 out.go:179]   - Using image docker.io/busybox:stable
	I1029 08:21:29.021148    5303 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:21:29.021173    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1029 08:21:29.021242    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:29.066156    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.069254    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.076529    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	W1029 08:21:29.077302    5303 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1029 08:21:29.077330    5303 retry.go:31] will retry after 294.845567ms: ssh: handshake failed: EOF
	W1029 08:21:29.083265    5303 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1029 08:21:29.083291    5303 retry.go:31] will retry after 242.646007ms: ssh: handshake failed: EOF
	I1029 08:21:29.095202    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:29.450130    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:21:29.452268    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1029 08:21:29.452425    5303 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1029 08:21:29.478528    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1029 08:21:29.478606    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1029 08:21:29.494687    5303 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:29.494760    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1029 08:21:29.519787    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:21:29.531382    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:21:29.535743    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:21:29.637123    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1029 08:21:29.637206    5303 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1029 08:21:29.648222    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1029 08:21:29.648295    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1029 08:21:29.652133    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1029 08:21:29.683485    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:21:29.689346    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:29.698834    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1029 08:21:29.698909    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1029 08:21:29.703898    5303 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1029 08:21:29.703972    5303 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1029 08:21:29.754875    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:21:29.757287    5303 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1029 08:21:29.757308    5303 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1029 08:21:29.785151    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1029 08:21:29.785225    5303 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1029 08:21:29.787712    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:21:29.801990    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1029 08:21:29.802065    5303 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1029 08:21:29.831807    5303 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:21:29.831882    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1029 08:21:29.870708    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1029 08:21:29.870772    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1029 08:21:29.909250    5303 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1029 08:21:29.909330    5303 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1029 08:21:29.954008    5303 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:21:29.954086    5303 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1029 08:21:29.982981    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:21:29.984152    5303 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:21:29.984224    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1029 08:21:30.102500    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1029 08:21:30.102574    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1029 08:21:30.158012    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:21:30.166576    5303 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1029 08:21:30.166656    5303 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1029 08:21:30.168750    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 08:21:30.196519    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:21:30.220289    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1029 08:21:30.220386    5303 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1029 08:21:30.284286    5303 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.624693372s)
	I1029 08:21:30.284531    5303 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1029 08:21:30.284472    5303 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.440979295s)
	I1029 08:21:30.285385    5303 node_ready.go:35] waiting up to 6m0s for node "addons-757691" to be "Ready" ...
	I1029 08:21:30.352145    5303 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1029 08:21:30.352231    5303 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1029 08:21:30.444793    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1029 08:21:30.444865    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1029 08:21:30.589275    5303 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:21:30.589348    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1029 08:21:30.695340    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1029 08:21:30.695413    5303 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1029 08:21:30.788922    5303 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-757691" context rescaled to 1 replicas
	I1029 08:21:30.844829    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:21:30.915429    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1029 08:21:30.915509    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1029 08:21:31.108295    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1029 08:21:31.108394    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1029 08:21:31.282959    5303 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:21:31.283033    5303 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1029 08:21:31.453243    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1029 08:21:32.310941    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:33.121204    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.601333119s)
	I1029 08:21:34.357668    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.826202652s)
	I1029 08:21:34.357750    5303 addons.go:480] Verifying addon ingress=true in "addons-757691"
	I1029 08:21:34.358157    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.674603312s)
	I1029 08:21:34.357908    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.82208598s)
	I1029 08:21:34.357932    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.705721433s)
	I1029 08:21:34.358309    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.668897236s)
	W1029 08:21:34.358325    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:34.358339    5303 retry.go:31] will retry after 351.870836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:34.358392    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.603498327s)
	I1029 08:21:34.358421    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.570641107s)
	I1029 08:21:34.358456    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.375405175s)
	I1029 08:21:34.358463    5303 addons.go:480] Verifying addon registry=true in "addons-757691"
	I1029 08:21:34.358682    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.200597336s)
	I1029 08:21:34.358729    5303 addons.go:480] Verifying addon metrics-server=true in "addons-757691"
	I1029 08:21:34.358800    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.189976737s)
	I1029 08:21:34.358854    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.162255651s)
	I1029 08:21:34.359160    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.514251823s)
	W1029 08:21:34.359188    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1029 08:21:34.359202    5303 retry.go:31] will retry after 131.577103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1029 08:21:34.363044    5303 out.go:179] * Verifying ingress addon...
	I1029 08:21:34.364953    5303 out.go:179] * Verifying registry addon...
	I1029 08:21:34.365080    5303 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-757691 service yakd-dashboard -n yakd-dashboard
	
	I1029 08:21:34.368932    5303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1029 08:21:34.368985    5303 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1029 08:21:34.377415    5303 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:21:34.377434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.383054    5303 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:21:34.383075    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:34.491135    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:21:34.706889    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.253551589s)
	I1029 08:21:34.706925    5303 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-757691"
	I1029 08:21:34.710000    5303 out.go:179] * Verifying csi-hostpath-driver addon...
	I1029 08:21:34.710422    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:34.713604    5303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1029 08:21:34.723180    5303 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:21:34.723206    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:34.789265    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:34.876962    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.878183    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.217358    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.373630    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.374557    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.717570    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.874885    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.875037    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:36.216754    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.276513    5303 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1029 08:21:36.276610    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:36.294850    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:36.372569    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.372726    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:36.417430    5303 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1029 08:21:36.430273    5303 addons.go:239] Setting addon gcp-auth=true in "addons-757691"
	I1029 08:21:36.430317    5303 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:21:36.430769    5303 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:21:36.451089    5303 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1029 08:21:36.451138    5303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:21:36.471447    5303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:21:36.716672    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.872204    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.872585    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.217493    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:37.289473    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:37.339228    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.848048819s)
	I1029 08:21:37.339341    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.628894739s)
	W1029 08:21:37.339379    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:37.339403    5303 retry.go:31] will retry after 325.928364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:37.342525    5303 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:37.345448    5303 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1029 08:21:37.348245    5303 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1029 08:21:37.348266    5303 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1029 08:21:37.361452    5303 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1029 08:21:37.361517    5303 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1029 08:21:37.373879    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.374532    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.377337    5303 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:37.377357    5303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1029 08:21:37.390248    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:37.665625    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:37.717274    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:37.885725    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.905942    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.969973    5303 addons.go:480] Verifying addon gcp-auth=true in "addons-757691"
	I1029 08:21:37.973037    5303 out.go:179] * Verifying gcp-auth addon...
	I1029 08:21:37.976842    5303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1029 08:21:37.995443    5303 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1029 08:21:37.995479    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:38.217583    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.372647    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.373243    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.480434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:38.642917    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:38.642949    5303 retry.go:31] will retry after 480.232558ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:38.716961    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.872226    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.872623    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.980487    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.123618    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:39.217638    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:39.373252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.373940    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:39.480434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.717973    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:39.789143    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:39.873516    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.874769    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:39.945035    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:39.945064    5303 retry.go:31] will retry after 590.773258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:39.979927    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.216682    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.372811    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.373163    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.480120    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.536330    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:40.718078    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.874274    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.875000    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.979919    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.217197    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:41.367576    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:41.367607    5303 retry.go:31] will retry after 976.675145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:41.372528    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.372967    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.479646    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.716845    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:41.873564    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.873767    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.980246    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.217643    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:42.288785    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:42.345006    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:42.374067    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:42.374808    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.480297    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.716933    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:42.874025    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:42.874718    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.980496    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:43.169181    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:43.169224    5303 retry.go:31] will retry after 2.610484783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:43.217333    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.371763    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.372254    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.480489    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:43.716910    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.872838    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.872998    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.979838    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.216837    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:44.372767    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.373380    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:44.480356    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.717037    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:44.789007    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:44.872939    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.873076    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:44.979832    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.227318    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.374026    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:45.374541    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.480507    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.717811    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.779863    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:45.872636    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.873415    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:45.980122    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:46.216825    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.372509    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.372884    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.479737    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:46.574130    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:46.574162    5303 retry.go:31] will retry after 3.939041515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:46.716961    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.872604    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.872787    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.980625    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.216846    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:47.288697    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:47.373173    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.373554    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.480491    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.717546    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:47.872349    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.872474    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.980304    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.218620    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.372950    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.373456    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.480177    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.717328    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.871924    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.872100    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.980252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.217625    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:49.372419    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.372636    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.479893    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.717349    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:49.789116    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:49.872053    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.872167    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.980441    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.217545    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.371813    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.371992    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.479833    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.514013    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:50.716570    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.873673    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.874631    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.980751    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.217825    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:51.372041    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:51.372073    5303 retry.go:31] will retry after 3.541014901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:51.374407    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:51.374537    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.480642    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.716980    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:51.872828    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:51.873033    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.979986    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.216873    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:52.288474    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:52.372495    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.372684    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.480728    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.716251    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.872375    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.872532    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.980381    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.217343    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.372254    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.372358    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.480199    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.717596    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.873497    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.873919    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.980453    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.216447    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:54.288556    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:54.372852    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.372968    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.480339    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.716390    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:54.873093    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.873548    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.913696    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:54.981310    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.217849    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:55.373797    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.374313    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.480503    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.718127    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:55.722446    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:55.722474    5303 retry.go:31] will retry after 4.142071292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:55.872345    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.872746    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.980738    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.216674    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:56.288748    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:56.373086    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.373384    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.480599    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.716338    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:56.873262    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.873601    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.980518    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.216918    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.372622    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.373316    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.480257    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.717570    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.873199    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.873354    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.980639    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.216541    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:58.372815    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.372920    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:58.479730    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.716868    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:21:58.788466    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:21:58.872472    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:58.873297    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.980469    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.217164    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.373013    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:59.373369    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.480737    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.716769    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.865020    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:59.875061    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:59.875391    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.980594    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.223587    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:00.374314    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.375816    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:00.481290    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.718063    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:00.789782    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	W1029 08:22:00.861582    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:00.861659    5303 retry.go:31] will retry after 7.915106874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:00.872691    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:00.873009    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.979594    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:01.217251    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.372292    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:01.372509    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:01.484230    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:01.718010    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.872275    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:01.872768    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:01.980721    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:02.216684    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.373178    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:02.373568    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.480583    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:02.716434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.872454    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.872522    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:02.980694    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:03.216466    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:03.289016    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:22:03.371778    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:03.372231    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.480282    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:03.717895    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:03.872253    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.872801    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:03.980745    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:04.216869    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.371988    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:04.371988    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:04.480252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:04.717285    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.872451    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:04.872925    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:04.979592    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:05.217373    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:05.289215    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:22:05.372648    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:05.373162    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:05.479908    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:05.717075    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:05.873324    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:05.873736    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:05.980489    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:06.216706    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.373404    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:06.373737    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:06.480364    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:06.717329    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.872377    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:06.872617    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:06.980263    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:07.217080    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:07.372376    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:07.372741    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:07.480459    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:07.717660    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:07.788268    5303 node_ready.go:57] node "addons-757691" has "Ready":"False" status (will retry)
	I1029 08:22:07.872497    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:07.873000    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:07.979879    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:08.216652    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.372523    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:08.373855    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:08.479738    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:08.716877    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.776903    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:08.873680    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:08.873813    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:08.979839    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:09.243024    5303 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:22:09.243096    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.306667    5303 node_ready.go:49] node "addons-757691" is "Ready"
	I1029 08:22:09.306694    5303 node_ready.go:38] duration metric: took 39.021274902s for node "addons-757691" to be "Ready" ...
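	Node readiness took roughly 39 seconds here; until the Ready condition flips to True, the addon pods above largely stay Pending because the single node is not yet schedulable. As a sketch (node name taken from this log, and assuming a kubeconfig pointed at this cluster), the same condition can be read directly:

	# Prints "True" once the kubelet reports the node's Ready condition.
	kubectl get node addons-757691 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'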
	I1029 08:22:09.306708    5303 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:22:09.306767    5303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:22:09.412687    5303 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:22:09.412707    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:09.413146    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:09.507523    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:09.716984    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.883750    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:09.884182    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:09.982813    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:10.217105    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.373765    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:10.373914    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:10.455247    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.67830316s)
	W1029 08:22:10.455281    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:10.455301    5303 retry.go:31] will retry after 9.191478297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:10.455338    5303 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.148561907s)
	I1029 08:22:10.455350    5303 api_server.go:72] duration metric: took 42.16982129s to wait for apiserver process to appear ...
	I1029 08:22:10.455355    5303 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:22:10.455369    5303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:22:10.465707    5303 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:22:10.467042    5303 api_server.go:141] control plane version: v1.34.1
	I1029 08:22:10.467100    5303 api_server.go:131] duration metric: took 11.738543ms to wait for apiserver health ...
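	The apiserver health gate above is a plain HTTPS GET against /healthz that must return 200 with body "ok". The same probe run by hand against the endpoint shown in this log (assumption: run from a host that can reach 192.168.49.2; -k skips certificate verification, or point --cacert at the cluster CA on the node instead):

	curl -k https://192.168.49.2:8443/healthz
	# expected response body: ok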
	I1029 08:22:10.467124    5303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:22:10.472198    5303 system_pods.go:59] 19 kube-system pods found
	I1029 08:22:10.472303    5303 system_pods.go:61] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:10.472397    5303 system_pods.go:61] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:10.472429    5303 system_pods.go:61] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:10.472458    5303 system_pods.go:61] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:10.472480    5303 system_pods.go:61] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:10.472510    5303 system_pods.go:61] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:10.472540    5303 system_pods.go:61] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:10.472570    5303 system_pods.go:61] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:10.472601    5303 system_pods.go:61] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:10.472640    5303 system_pods.go:61] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:10.472662    5303 system_pods.go:61] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:10.472690    5303 system_pods.go:61] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:10.472728    5303 system_pods.go:61] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:10.472758    5303 system_pods.go:61] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:10.472783    5303 system_pods.go:61] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:10.472812    5303 system_pods.go:61] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:10.472845    5303 system_pods.go:61] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.472878    5303 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.472907    5303 system_pods.go:61] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:10.472935    5303 system_pods.go:74] duration metric: took 5.788237ms to wait for pod list to return data ...
	I1029 08:22:10.472977    5303 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:22:10.484635    5303 default_sa.go:45] found service account: "default"
	I1029 08:22:10.484702    5303 default_sa.go:55] duration metric: took 11.705016ms for default service account to be created ...
	I1029 08:22:10.484735    5303 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:22:10.485401    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:10.507673    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:10.507761    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:10.507785    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:10.507831    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:10.507863    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:10.507888    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:10.507914    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:10.507947    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:10.507976    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:10.508003    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:10.508024    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:10.508058    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:10.508088    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:10.508117    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:10.508143    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:10.508174    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:10.508199    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:10.508225    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.508274    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.508304    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:10.508383    5303 retry.go:31] will retry after 220.584529ms: missing components: kube-dns
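	The k8s-apps wait above is blocked solely on kube-dns, i.e. the coredns-66bc5c9577-bzfbh pod still listed as Pending, while the control-plane pods are already Running. A quick way to watch just that component (assuming the standard k8s-app=kube-dns label that coredns pods carry):

	kubectl -n kube-system get pods -l k8s-app=kube-dns -w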
	I1029 08:22:10.717900    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.732974    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:10.733064    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:10.733089    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:10.733131    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:10.733159    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:10.733185    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:10.733206    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:10.733238    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:10.733264    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:10.733292    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:10.733321    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:10.733352    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:10.733378    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:10.733405    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:10.733432    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:10.733465    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:10.733492    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:10.733518    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.733549    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:10.733580    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:10.733616    5303 retry.go:31] will retry after 288.662598ms: missing components: kube-dns
	I1029 08:22:10.880324    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:10.880561    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:10.980723    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:11.027028    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:11.027066    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:11.027076    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:11.027085    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:11.027092    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:11.027096    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:11.027102    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:11.027107    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:11.027112    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:11.027121    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:11.027125    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:11.027131    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:11.027148    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:11.027157    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:11.027169    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:11.027176    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:11.027182    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:11.027191    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.027197    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.027202    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:11.027220    5303 retry.go:31] will retry after 414.176369ms: missing components: kube-dns
	I1029 08:22:11.217979    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.373043    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:11.373277    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:11.447320    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:11.447358    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:11.447370    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:11.447378    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:11.447384    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:11.447389    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:11.447394    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:11.447404    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:11.447409    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:11.447425    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:11.447430    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:11.447440    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:11.447448    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:11.447454    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:11.447468    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:11.447478    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:11.447484    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:11.447493    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.447502    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.447510    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:11.447525    5303 retry.go:31] will retry after 417.054385ms: missing components: kube-dns
	I1029 08:22:11.481306    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:11.718836    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.873877    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:11.873924    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:11.873935    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:11.873949    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:11.873958    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:11.873963    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:11.873970    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:11.873985    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:11.873994    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:11.874004    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:11.874017    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:11.874026    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:11.874033    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:11.874064    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:11.874076    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:11.874084    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:11.874098    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:11.874108    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.874119    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:11.874128    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:11.874144    5303 retry.go:31] will retry after 458.682438ms: missing components: kube-dns
	I1029 08:22:11.877549    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:11.882839    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:11.987237    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:12.218049    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.336951    5303 system_pods.go:86] 19 kube-system pods found
	I1029 08:22:12.336987    5303 system_pods.go:89] "coredns-66bc5c9577-bzfbh" [1bc13dfd-dff8-4eeb-b155-3569e43ad89e] Running
	I1029 08:22:12.336999    5303 system_pods.go:89] "csi-hostpath-attacher-0" [d705e0ea-40e4-437c-a6de-956ca2c6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:22:12.337007    5303 system_pods.go:89] "csi-hostpath-resizer-0" [fbeb4843-be02-431c-b113-519d9e5b9b6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:22:12.337015    5303 system_pods.go:89] "csi-hostpathplugin-gzlfm" [8c4dbdab-2138-4f48-8123-b62fb8422ba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:22:12.337022    5303 system_pods.go:89] "etcd-addons-757691" [56450451-741d-4b95-83d5-0b9e6dd58bed] Running
	I1029 08:22:12.337028    5303 system_pods.go:89] "kindnet-v4rb6" [7e0ab1e9-1820-4994-8be3-469e9a30d7ed] Running
	I1029 08:22:12.337034    5303 system_pods.go:89] "kube-apiserver-addons-757691" [3c09f332-4421-49a6-9586-3ac3977d640d] Running
	I1029 08:22:12.337038    5303 system_pods.go:89] "kube-controller-manager-addons-757691" [dd239a9e-37f2-488b-9c13-a270541d20db] Running
	I1029 08:22:12.337052    5303 system_pods.go:89] "kube-ingress-dns-minikube" [8ece08bd-0f2b-4b7d-8456-43c0d10556d7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:12.337061    5303 system_pods.go:89] "kube-proxy-lfn78" [3f6c6b05-b806-4322-a980-c990d22d6a56] Running
	I1029 08:22:12.337067    5303 system_pods.go:89] "kube-scheduler-addons-757691" [2abe4b63-de35-4300-a3d3-25614e0fc123] Running
	I1029 08:22:12.337081    5303 system_pods.go:89] "metrics-server-85b7d694d7-2bwkc" [23336639-b3d3-4d15-a905-a3fcfe642ab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:12.337088    5303 system_pods.go:89] "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:12.337097    5303 system_pods.go:89] "registry-6b586f9694-rmhqh" [206ac621-1f76-46e0-a1fa-5072bef29b87] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:12.337106    5303 system_pods.go:89] "registry-creds-764b6fb674-7wrll" [dc216c07-bc5f-4a39-a59b-999712532cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:12.337115    5303 system_pods.go:89] "registry-proxy-wsh7n" [a1a89be0-a861-4f65-bbf5-bd788fa6a177] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:12.337121    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-46nzh" [d23ac4dc-f9a8-4706-abf5-2844753f1855] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:12.337128    5303 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n9z4k" [cd104357-7ad8-4942-a407-885f9de51e5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:12.337132    5303 system_pods.go:89] "storage-provisioner" [9eb2d64d-e37a-4c83-9b28-e64155bbbbbf] Running
	I1029 08:22:12.337142    5303 system_pods.go:126] duration metric: took 1.852389453s to wait for k8s-apps to be running ...
	I1029 08:22:12.337156    5303 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:22:12.337213    5303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:22:12.350472    5303 system_svc.go:56] duration metric: took 13.30884ms WaitForService to wait for kubelet
	I1029 08:22:12.350500    5303 kubeadm.go:587] duration metric: took 44.064969054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:22:12.350529    5303 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:22:12.353529    5303 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:22:12.353562    5303 node_conditions.go:123] node cpu capacity is 2
	I1029 08:22:12.353575    5303 node_conditions.go:105] duration metric: took 3.040401ms to run NodePressure ...
	I1029 08:22:12.353587    5303 start.go:242] waiting for startup goroutines ...
	I1029 08:22:12.372803    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:12.372975    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:12.480099    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:12.717721    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.874765    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:12.875128    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:12.984624    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:13.217950    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:13.373928    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:13.374369    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:13.480390    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:13.716910    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:13.872154    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:13.872354    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:13.981098    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:14.217509    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:14.373504    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:14.373585    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:14.480470    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:14.716657    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:14.873615    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:14.873974    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:14.979537    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:15.217987    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:15.374715    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:15.375127    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:15.481620    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:15.718249    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:15.874325    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:15.874799    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:15.981126    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:16.230302    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:16.378982    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:16.379492    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:16.483375    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:16.722933    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:16.876760    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:16.876849    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:16.990827    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:17.219054    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:17.373026    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:17.373510    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:17.480752    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:17.721490    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:17.874647    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:17.874792    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:17.979817    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:18.217674    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:18.374293    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:18.374429    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:18.480567    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:18.717755    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:18.874137    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:18.874728    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:18.979820    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:19.217111    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:19.373050    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:19.373192    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:19.480164    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:19.647538    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:19.717267    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:19.874072    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:19.874661    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:19.981049    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:20.217583    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:20.373344    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:20.373509    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:20.480581    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:20.718384    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:20.754528    5303 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.106953221s)
	W1029 08:22:20.754560    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:20.754579    5303 retry.go:31] will retry after 27.842036107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:20.873383    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:20.873581    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:20.980903    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:21.217601    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:21.374637    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:21.375025    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:21.480291    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:21.717914    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:21.874364    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:21.874524    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:21.980581    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:22.217289    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:22.374262    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.374860    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:22.479778    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:22.716672    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:22.873594    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.873725    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:22.979487    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:23.216798    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:23.373845    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:23.373968    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:23.479519    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:23.716694    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:23.874369    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:23.874449    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:23.980819    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:24.217520    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:24.374547    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:24.375049    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:24.481610    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:24.716520    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:24.873214    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:24.873633    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:24.980386    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:25.217220    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:25.381262    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.381700    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:25.480812    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:25.722165    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:25.876514    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.876648    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:25.992673    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:26.217439    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:26.376332    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.376766    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:26.481389    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:26.718247    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:26.875165    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.875624    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:26.981610    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:27.225712    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:27.373911    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:27.374043    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.479973    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:27.718330    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:27.873870    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:27.874241    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.980465    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:28.218095    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:28.375279    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:28.376717    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:28.480974    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:28.718273    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:28.874782    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:28.875278    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:28.980919    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:29.218200    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:29.379845    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.380363    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:29.481452    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:29.718078    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:29.872884    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.873006    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:29.979953    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:30.217385    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:30.373966    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:30.374083    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:30.480294    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:30.717816    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:30.873646    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:30.873740    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:30.980755    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:31.217218    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:31.373622    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.373735    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:31.480834    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:31.717696    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:31.873047    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:31.873304    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.980714    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:32.217212    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:32.373464    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.373788    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:32.480290    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:32.717546    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:32.873978    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:32.874091    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.980277    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:33.217174    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:33.372573    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:33.372754    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:33.480831    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:33.717017    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:33.873281    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:33.873704    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:33.981227    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:34.217775    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:34.373676    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.373762    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:34.481691    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:34.717785    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:34.873604    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:34.873784    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.981222    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:35.219959    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:35.377315    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.377419    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:35.480447    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:35.717239    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:35.874753    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.875084    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:35.980184    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:36.218388    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:36.375967    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:36.377644    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:36.480878    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:36.717457    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:36.873760    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:36.874183    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:36.980457    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:37.223873    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:37.373072    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:37.373240    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.480020    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:37.717313    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:37.873351    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:37.874087    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.979895    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:38.217252    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:38.373438    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.373694    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:38.481621    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:38.717468    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:38.873356    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:38.873579    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.980541    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:39.221214    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:39.373018    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:39.373190    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:39.480591    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:39.718021    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:39.873419    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:39.873796    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:39.981816    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:40.223895    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:40.373853    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:40.374376    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.480898    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:40.718115    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:40.873357    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.874042    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:40.982505    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:41.217271    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:41.373003    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:41.373319    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:41.480144    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:41.718863    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:41.875145    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:41.875449    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:41.980471    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:42.220801    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:42.374260    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:42.374997    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:42.480956    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:42.717655    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:42.874800    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:42.876263    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:42.981328    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:43.217007    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:43.373857    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:43.374290    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:43.480620    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:43.716972    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:43.874256    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:43.875063    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:43.981475    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:44.218092    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:44.374209    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:44.374598    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.480445    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:44.718193    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:44.873218    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:44.873402    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.980788    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:45.218123    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:45.377266    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:45.377502    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:45.481035    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:45.717783    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:45.873926    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:45.874846    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:45.980474    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:46.217386    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:46.373463    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:46.373889    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:46.485812    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:46.717148    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:46.872917    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:46.873730    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:46.979831    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:47.217168    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:47.373745    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:47.374161    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.480494    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:47.718872    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:47.873849    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:47.873990    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.979715    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:48.218087    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:48.373794    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.374147    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:48.480718    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:48.597590    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:48.717470    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:48.873409    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.874525    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:48.981277    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:49.217945    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:49.374143    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.374561    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:49.480539    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:22:49.578617    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:49.578649    5303 retry.go:31] will retry after 19.709762938s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:49.717108    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:49.874048    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.874215    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:49.980697    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:50.217149    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:50.374673    5303 kapi.go:107] duration metric: took 1m16.005739028s to wait for kubernetes.io/minikube-addons=registry ...
	I1029 08:22:50.375195    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:50.481055    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:50.718473    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:50.875301    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:50.980277    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:51.219727    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:51.373107    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.479848    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:51.718388    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:51.872528    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.980549    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:52.216545    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:52.373035    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:52.480925    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:52.718418    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:52.877163    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:52.981001    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:53.220463    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:53.372992    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:53.481084    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:53.717424    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:53.872627    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:53.980994    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:54.220532    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:54.380502    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:54.481949    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:54.727948    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:54.874743    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:54.996303    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:55.218074    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:55.373480    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:55.480036    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:55.717625    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:55.873448    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:55.980411    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:56.217649    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:56.377460    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:56.480467    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:56.717328    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:56.872383    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:56.980382    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:57.216774    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:57.372731    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:57.479697    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:57.717076    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:57.872082    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:57.979932    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:58.217721    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:58.373689    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:58.481150    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:58.719042    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:58.872277    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:58.979662    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:59.217145    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:59.372392    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.480274    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:59.717877    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:59.872273    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.980810    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:00.218204    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:00.373223    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:00.481114    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:00.718112    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:00.871821    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:00.979636    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:01.220170    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:01.378952    5303 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:01.481178    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:01.718023    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:01.872409    5303 kapi.go:107] duration metric: took 1m27.503418804s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1029 08:23:01.980038    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:02.217846    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:02.481401    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:02.720706    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:02.980978    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:03.217145    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:03.480479    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:03.717151    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:03.980434    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:04.217638    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:04.480761    5303 kapi.go:107] duration metric: took 1m26.503920246s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1029 08:23:04.483742    5303 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-757691 cluster.
	I1029 08:23:04.487506    5303 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1029 08:23:04.490569    5303 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1029 08:23:04.717294    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:05.218712    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:05.717667    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:06.218731    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:06.718520    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:07.219285    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:07.717677    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:08.217454    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:08.717604    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:09.217272    5303 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:09.289616    5303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:23:09.717684    5303 kapi.go:107] duration metric: took 1m35.004078754s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W1029 08:23:10.200042    5303 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 08:23:10.200139    5303 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1029 08:23:10.203438    5303 out.go:179] * Enabled addons: registry-creds, storage-provisioner-rancher, nvidia-device-plugin, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1029 08:23:10.207160    5303 addons.go:515] duration metric: took 1m41.920285412s for enable addons: enabled=[registry-creds storage-provisioner-rancher nvidia-device-plugin cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1029 08:23:10.207214    5303 start.go:247] waiting for cluster config update ...
	I1029 08:23:10.207239    5303 start.go:256] writing updated cluster config ...
	I1029 08:23:10.209388    5303 ssh_runner.go:195] Run: rm -f paused
	I1029 08:23:10.214355    5303 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:23:10.218131    5303 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.223838    5303 pod_ready.go:94] pod "coredns-66bc5c9577-bzfbh" is "Ready"
	I1029 08:23:10.223879    5303 pod_ready.go:86] duration metric: took 5.72535ms for pod "coredns-66bc5c9577-bzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.226114    5303 pod_ready.go:83] waiting for pod "etcd-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.230538    5303 pod_ready.go:94] pod "etcd-addons-757691" is "Ready"
	I1029 08:23:10.230567    5303 pod_ready.go:86] duration metric: took 4.423938ms for pod "etcd-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.233189    5303 pod_ready.go:83] waiting for pod "kube-apiserver-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.238247    5303 pod_ready.go:94] pod "kube-apiserver-addons-757691" is "Ready"
	I1029 08:23:10.238274    5303 pod_ready.go:86] duration metric: took 5.060088ms for pod "kube-apiserver-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.240864    5303 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.618818    5303 pod_ready.go:94] pod "kube-controller-manager-addons-757691" is "Ready"
	I1029 08:23:10.618851    5303 pod_ready.go:86] duration metric: took 377.95873ms for pod "kube-controller-manager-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:10.818544    5303 pod_ready.go:83] waiting for pod "kube-proxy-lfn78" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.218482    5303 pod_ready.go:94] pod "kube-proxy-lfn78" is "Ready"
	I1029 08:23:11.218514    5303 pod_ready.go:86] duration metric: took 399.940401ms for pod "kube-proxy-lfn78" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.419051    5303 pod_ready.go:83] waiting for pod "kube-scheduler-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.818590    5303 pod_ready.go:94] pod "kube-scheduler-addons-757691" is "Ready"
	I1029 08:23:11.818618    5303 pod_ready.go:86] duration metric: took 399.539059ms for pod "kube-scheduler-addons-757691" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:11.818633    5303 pod_ready.go:40] duration metric: took 1.604244151s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:23:12.241372    5303 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 08:23:12.244697    5303 out.go:179] * Done! kubectl is now configured to use "addons-757691" cluster and "default" namespace by default
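Note on the inspektor-gadget retries logged above: kubectl apply rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file declares neither apiVersion nor kind, which is exactly what the "apiVersion not set, kind not set" validation error reports. As a rough illustration only (not minikube's actual code), the Go sketch below performs the same structural check locally; the file path is copied from the log, while the naive "---" document split is an assumption made for the example.

// Hypothetical pre-apply check: every YAML document handed to `kubectl apply`
// must declare apiVersion and kind.
package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

// typeMeta mirrors the two fields the validation error complains about.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	// Path copied from the failing command in the log above.
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Naive multi-document split; sufficient for a sanity check.
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Fprintf(os.Stderr, "document %d: cannot parse: %v\n", i, err)
			os.Exit(1)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// This is the condition reported as "apiVersion not set, kind not set".
			fmt.Fprintf(os.Stderr, "document %d: apiVersion or kind not set\n", i)
			os.Exit(1)
		}
	}
	fmt.Println("all documents declare apiVersion and kind")
}

A check like this distinguishes a truncated or mis-rendered manifest from a genuine API-server rejection before the addon enters its retry loop.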
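The gcp-auth messages above also note that a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. Below is a minimal sketch of such a pod spec rendered from Go types; the pod name, image, command, and the label value "true" are illustrative assumptions, since the log only names the key.

// Sketch of a pod that opts out of gcp-auth credential injection.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			Labels: map[string]string{
				// Only the key is documented in the log; the value is an assumption.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc", // image already present in this cluster
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}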
	
	
	==> CRI-O <==
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.625978584Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:16cb5ae1a1f489865d65b30b5a6ffd947734326c73781b1ea4df788bb1f95238 UID:3136dfda-447e-4351-bffc-ab9f47a42a8b NetNS:/var/run/netns/8813a454-e3f0-4133-86e7-483a42afd936 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d4a0}] Aliases:map[]}"
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.626125753Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.630146643Z" level=info msg="Ran pod sandbox 16cb5ae1a1f489865d65b30b5a6ffd947734326c73781b1ea4df788bb1f95238 with infra container: default/busybox/POD" id=c422bab6-4572-482d-b57c-c64ce5192d7d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.633428038Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9dd44d65-5663-45a4-893e-4b43e067f372 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.633696857Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9dd44d65-5663-45a4-893e-4b43e067f372 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.633812805Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9dd44d65-5663-45a4-893e-4b43e067f372 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.634515803Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7237bec9-fd2d-447c-85ef-e8ddcadb5491 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:23:13 addons-757691 crio[833]: time="2025-10-29T08:23:13.640047617Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.608403696Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7237bec9-fd2d-447c-85ef-e8ddcadb5491 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.608985864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1d08de7-a9a0-4ed7-bec5-dc9bb84bfda7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.610421703Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=68169a27-f781-4d0e-8e01-9092d16437a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.617745034Z" level=info msg="Creating container: default/busybox/busybox" id=231774ab-5f4f-4a20-9ca2-754d59d00476 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.617871616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.624451646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.625138783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.641370128Z" level=info msg="Created container 988babaa55e15b39356f7edcb620e41b3452095e5fb55caddc0b6cde51f5e918: default/busybox/busybox" id=231774ab-5f4f-4a20-9ca2-754d59d00476 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.64254632Z" level=info msg="Starting container: 988babaa55e15b39356f7edcb620e41b3452095e5fb55caddc0b6cde51f5e918" id=2edcfb5e-7b15-4e9f-9f43-b1b9f36da3c7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:23:15 addons-757691 crio[833]: time="2025-10-29T08:23:15.645245088Z" level=info msg="Started container" PID=5068 containerID=988babaa55e15b39356f7edcb620e41b3452095e5fb55caddc0b6cde51f5e918 description=default/busybox/busybox id=2edcfb5e-7b15-4e9f-9f43-b1b9f36da3c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16cb5ae1a1f489865d65b30b5a6ffd947734326c73781b1ea4df788bb1f95238
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.591567477Z" level=info msg="Removing container: 0fac6054e444479169b91fb479bb04a3b40907ebcf5fb3a0d5a2b999e968a09e" id=d383e893-6a77-4efb-b72f-7da2bbf12a5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.593872304Z" level=info msg="Error loading conmon cgroup of container 0fac6054e444479169b91fb479bb04a3b40907ebcf5fb3a0d5a2b999e968a09e: cgroup deleted" id=d383e893-6a77-4efb-b72f-7da2bbf12a5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.598272726Z" level=info msg="Removed container 0fac6054e444479169b91fb479bb04a3b40907ebcf5fb3a0d5a2b999e968a09e: gcp-auth/gcp-auth-certs-create-l8prh/create" id=d383e893-6a77-4efb-b72f-7da2bbf12a5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.601043306Z" level=info msg="Stopping pod sandbox: 3fef7dee7bab7e0f34dd6bcede85a47eed68bf604e4305f0ec82cd046cc9dabd" id=29b7f821-a3d6-4c4f-9b7d-594366de57cb name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.601116997Z" level=info msg="Stopped pod sandbox (already stopped): 3fef7dee7bab7e0f34dd6bcede85a47eed68bf604e4305f0ec82cd046cc9dabd" id=29b7f821-a3d6-4c4f-9b7d-594366de57cb name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.601583111Z" level=info msg="Removing pod sandbox: 3fef7dee7bab7e0f34dd6bcede85a47eed68bf604e4305f0ec82cd046cc9dabd" id=ec87c21c-2eec-4793-83c8-582196b3ac1d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:23:22 addons-757691 crio[833]: time="2025-10-29T08:23:22.606609861Z" level=info msg="Removed pod sandbox: 3fef7dee7bab7e0f34dd6bcede85a47eed68bf604e4305f0ec82cd046cc9dabd" id=ec87c21c-2eec-4793-83c8-582196b3ac1d name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	988babaa55e15       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   16cb5ae1a1f48       busybox                                     default
	ee8944794e805       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          16 seconds ago       Running             csi-snapshotter                          0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	32f7a28d2d03b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          17 seconds ago       Running             csi-provisioner                          0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	239ec534461a0       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            19 seconds ago       Running             liveness-probe                           0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	0555333eb38f5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           20 seconds ago       Running             hostpath                                 0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	69ba1f444f956       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 21 seconds ago       Running             gcp-auth                                 0                   7a6e958924609       gcp-auth-78565c9fb4-7c65l                   gcp-auth
	86329b8c65996       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             24 seconds ago       Running             controller                               0                   3ab16108b2714       ingress-nginx-controller-675c5ddd98-8xwgl   ingress-nginx
	b7ebb9338f4b7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                31 seconds ago       Running             node-driver-registrar                    0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	26f10b73cd601       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            32 seconds ago       Running             gadget                                   0                   b6d229a38598d       gadget-lfsrs                                gadget
	41792cfd96315       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             32 seconds ago       Exited              patch                                    3                   0ca7b39cb07e0       gcp-auth-certs-patch-fvsg5                  gcp-auth
	861cd9d17d1a2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              36 seconds ago       Running             registry-proxy                           0                   482babb3e37e3       registry-proxy-wsh7n                        kube-system
	4f38205b7fd4d       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     39 seconds ago       Running             nvidia-device-plugin-ctr                 0                   7f4a810201735       nvidia-device-plugin-daemonset-k472l        kube-system
	080445adfb273       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   44 seconds ago       Running             csi-external-health-monitor-controller   0                   16207fea7d35e       csi-hostpathplugin-gzlfm                    kube-system
	525382941facb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      46 seconds ago       Running             volume-snapshot-controller               0                   d15fd5a5e902d       snapshot-controller-7d9fbc56b8-46nzh        kube-system
	5d17cf36f0fb1       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              46 seconds ago       Running             yakd                                     0                   cc42b9677080a       yakd-dashboard-5ff678cb9-z7rr6              yakd-dashboard
	0d2bb5596e6b3       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             50 seconds ago       Running             local-path-provisioner                   0                   9f540b11d4510       local-path-provisioner-648f6765c9-t42xh     local-path-storage
	c8fe768126de3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           51 seconds ago       Running             registry                                 0                   8192fa2dad007       registry-6b586f9694-rmhqh                   kube-system
	444ef3af30aeb       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              52 seconds ago       Running             csi-resizer                              0                   b1dce3800ca5f       csi-hostpath-resizer-0                      kube-system
	0de41296f0ad8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               54 seconds ago       Running             cloud-spanner-emulator                   0                   e912b533b760e       cloud-spanner-emulator-86bd5cbb97-ddvrf     default
	53ee1a72ac2ca       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             58 seconds ago       Exited              patch                                    1                   8abac4834b62a       ingress-nginx-admission-patch-gtc6l         ingress-nginx
	11d81ea66afbd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   59 seconds ago       Exited              create                                   0                   dfa8291cdeb8c       ingress-nginx-admission-create-6btnm        ingress-nginx
	a89be2ad8c3cb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   178d76584b528       kube-ingress-dns-minikube                   kube-system
	03254ae94d330       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f941bbdcc5a20       snapshot-controller-7d9fbc56b8-n9z4k        kube-system
	380a55eebf3cd       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   47326e5e80fde       metrics-server-85b7d694d7-2bwkc             kube-system
	dbc66dc27a615       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   fde2a1df08991       csi-hostpath-attacher-0                     kube-system
	561fd8a760135       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   1e96f9f9d3ba9       coredns-66bc5c9577-bzfbh                    kube-system
	bc4be5a012bc9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   cbc4dc0907694       storage-provisioner                         kube-system
	fb05a0521754d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             About a minute ago   Running             kube-proxy                               0                   1883c7a31064a       kube-proxy-lfn78                            kube-system
	bdb041cabd34f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   782f17f840015       kindnet-v4rb6                               kube-system
	349c9103101d7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   13a6662a15be1       kube-controller-manager-addons-757691       kube-system
	6fb3b53c30069       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   ea6b0219bdeeb       kube-scheduler-addons-757691                kube-system
	df417919fab6f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   9a36fb41a142f       kube-apiserver-addons-757691                kube-system
	2a94afd232256       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   55efdf76794cb       etcd-addons-757691                          kube-system
	
	
	==> coredns [561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6] <==
	[INFO] 10.244.0.15:34000 - 6909 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000102451s
	[INFO] 10.244.0.15:34000 - 11728 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002735733s
	[INFO] 10.244.0.15:34000 - 26741 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002406851s
	[INFO] 10.244.0.15:34000 - 29688 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000139431s
	[INFO] 10.244.0.15:34000 - 21376 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000147013s
	[INFO] 10.244.0.15:44602 - 2698 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000244318s
	[INFO] 10.244.0.15:44602 - 2936 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171735s
	[INFO] 10.244.0.15:42075 - 33231 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104189s
	[INFO] 10.244.0.15:42075 - 33436 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100194s
	[INFO] 10.244.0.15:50953 - 48969 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000237803s
	[INFO] 10.244.0.15:50953 - 48780 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000255798s
	[INFO] 10.244.0.15:55147 - 23590 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001922686s
	[INFO] 10.244.0.15:55147 - 24035 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001169866s
	[INFO] 10.244.0.15:50947 - 9710 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134434s
	[INFO] 10.244.0.15:50947 - 9314 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080033s
	[INFO] 10.244.0.21:37427 - 23942 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189466s
	[INFO] 10.244.0.21:42923 - 1160 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105305s
	[INFO] 10.244.0.21:60789 - 38809 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167838s
	[INFO] 10.244.0.21:55041 - 23874 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098938s
	[INFO] 10.244.0.21:38291 - 65129 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121191s
	[INFO] 10.244.0.21:39590 - 44302 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124113s
	[INFO] 10.244.0.21:59908 - 12824 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00634966s
	[INFO] 10.244.0.21:44784 - 62183 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003179035s
	[INFO] 10.244.0.21:46494 - 39593 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002051771s
	[INFO] 10.244.0.21:55384 - 52854 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002161048s
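The NXDOMAIN pattern in the coredns log above is ordinary resolv.conf search-path expansion: a name with fewer dots than ndots is tried with each search domain appended (…svc.cluster.local.cluster.local, …us-east-2.compute.internal, and so on) before the bare form answers NOERROR. The self-contained Go sketch below reproduces that query order; the search list is inferred from the queries above and is an assumption about the pod's resolv.conf.

// Illustrates how a relative DNS name is expanded through a search list,
// producing the sequence of queries (mostly NXDOMAIN) seen in the coredns log.
package main

import "fmt"

// expand lists the queries a stub resolver generates for a name when it walks
// a search list before trying the name as-is.
func expand(name string, search []string) []string {
	out := make([]string, 0, len(search)+1)
	for _, domain := range search {
		out = append(out, name+"."+domain)
	}
	return append(out, name) // the absolute form is the one that answers NOERROR above
}

func main() {
	// Search list inferred from the queries above (an assumption).
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search) {
		fmt.Println(q)
	}
}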
	
	
	==> describe nodes <==
	Name:               addons-757691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-757691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=addons-757691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_21_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-757691
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-757691"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-757691
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:23:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:23:04 +0000   Wed, 29 Oct 2025 08:21:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:23:04 +0000   Wed, 29 Oct 2025 08:21:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:23:04 +0000   Wed, 29 Oct 2025 08:21:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:23:04 +0000   Wed, 29 Oct 2025 08:22:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-757691
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b8735395-3669-4c20-84a8-3e15bb7194b2
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-ddvrf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  gadget                      gadget-lfsrs                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  gcp-auth                    gcp-auth-78565c9fb4-7c65l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8xwgl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         111s
	  kube-system                 coredns-66bc5c9577-bzfbh                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 csi-hostpathplugin-gzlfm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 etcd-addons-757691                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-v4rb6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-addons-757691                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-addons-757691        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-lfn78                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-addons-757691                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 metrics-server-85b7d694d7-2bwkc              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         112s
	  kube-system                 nvidia-device-plugin-daemonset-k472l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 registry-6b586f9694-rmhqh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-creds-764b6fb674-7wrll              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 registry-proxy-wsh7n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 snapshot-controller-7d9fbc56b8-46nzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 snapshot-controller-7d9fbc56b8-n9z4k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  local-path-storage          local-path-provisioner-648f6765c9-t42xh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-z7rr6               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 117s  kube-proxy       
	  Normal   Starting                 2m3s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m3s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s  kubelet          Node addons-757691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s  kubelet          Node addons-757691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s  kubelet          Node addons-757691 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           119s  node-controller  Node addons-757691 event: Registered Node addons-757691 in Controller
	  Normal   NodeReady                77s   kubelet          Node addons-757691 status is now: NodeReady
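The percentages in the Allocated resources block above are simply requests and limits divided by the node's allocatable capacity: 1050m of CPU requests against 2 allocatable CPUs (2000m) is 52%, 100m of CPU limits is 5%, and 638Mi of memory requests against 8022296Ki allocatable is 8%. A tiny Go sketch of that arithmetic, with values copied from the tables above and truncation to whole percent assumed:

// Recomputes the request/limit percentages shown in "Allocated resources".
package main

import "fmt"

// pct is used over allocatable, truncated to a whole percent (assumed behaviour).
func pct(used, allocatable float64) int {
	return int(used / allocatable * 100)
}

func main() {
	const (
		allocCPUMilli = 2000.0    // 2 allocatable CPUs
		allocMemKi    = 8022296.0 // allocatable memory from the block above
	)
	fmt.Printf("cpu requests 1050m  -> %d%%\n", pct(1050, allocCPUMilli))  // 52%
	fmt.Printf("cpu limits    100m  -> %d%%\n", pct(100, allocCPUMilli))   // 5%
	fmt.Printf("mem requests  638Mi -> %d%%\n", pct(638*1024, allocMemKi)) // 8%
	fmt.Printf("mem limits    476Mi -> %d%%\n", pct(476*1024, allocMemKi)) // 6%
}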
	
	
	==> dmesg <==
	[Oct29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014848] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520802] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035216] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815569] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.730396] kauditd_printk_skb: 36 callbacks suppressed
	[Oct29 08:19] kauditd_printk_skb: 8 callbacks suppressed
	[Oct29 08:21] overlayfs: idmapped layers are currently not supported
	[  +0.080642] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160] <==
	{"level":"warn","ts":"2025-10-29T08:21:18.493552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.512935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.524726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.545119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.561026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.580254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.606238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.617266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.627889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.649031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.671740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.681612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.705173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.720898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.739164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.765265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.780704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.801030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:18.892539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:34.955015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:34.973242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.921150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.942886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.981115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:56.996136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36044","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [69ba1f444f956f1dcc0f5189a80713046222b26a14a91565d52356faf252a0c1] <==
	2025/10/29 08:23:04 GCP Auth Webhook started!
	2025/10/29 08:23:13 Ready to marshal response ...
	2025/10/29 08:23:13 Ready to write response ...
	2025/10/29 08:23:13 Ready to marshal response ...
	2025/10/29 08:23:13 Ready to write response ...
	2025/10/29 08:23:13 Ready to marshal response ...
	2025/10/29 08:23:13 Ready to write response ...
	
	
	==> kernel <==
	 08:23:26 up 5 min,  0 user,  load average: 2.73, 1.49, 0.61
	Linux addons-757691 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de] <==
	I1029 08:21:28.459700       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 08:21:58.460035       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 08:21:58.460182       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 08:21:58.460212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 08:21:58.461066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 08:21:59.959698       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 08:21:59.959736       1 metrics.go:72] Registering metrics
	I1029 08:21:59.959802       1 controller.go:711] "Syncing nftables rules"
	E1029 08:21:59.960153       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1029 08:22:08.459818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:08.459872       1 main.go:301] handling current node
	I1029 08:22:18.459780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:18.459809       1 main.go:301] handling current node
	I1029 08:22:28.459619       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:28.459652       1 main.go:301] handling current node
	I1029 08:22:38.459735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:38.459775       1 main.go:301] handling current node
	I1029 08:22:48.459835       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:48.459873       1 main.go:301] handling current node
	I1029 08:22:58.460067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:58.460130       1 main.go:301] handling current node
	I1029 08:23:08.459188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:23:08.459219       1 main.go:301] handling current node
	I1029 08:23:18.459144       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:23:18.459256       1 main.go:301] handling current node
	
	
	==> kube-apiserver [df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377] <==
	W1029 08:21:34.952261       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:34.967814       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1029 08:21:37.796716       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.109.17.24"}
	W1029 08:21:56.915501       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1029 08:21:56.934903       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:56.981127       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:56.995918       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:22:09.060137       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.17.24:443: connect: connection refused
	E1029 08:22:09.060273       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.17.24:443: connect: connection refused" logger="UnhandledError"
	W1029 08:22:09.092563       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.17.24:443: connect: connection refused
	E1029 08:22:09.092678       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.17.24:443: connect: connection refused" logger="UnhandledError"
	W1029 08:22:09.177139       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.17.24:443: connect: connection refused
	E1029 08:22:09.177390       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.17.24:443: connect: connection refused" logger="UnhandledError"
	E1029 08:22:27.137053       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	W1029 08:22:27.142617       1 handler_proxy.go:99] no RequestInfo found in the context
	E1029 08:22:27.142685       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1029 08:22:27.143531       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	E1029 08:22:27.149485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	E1029 08:22:27.161003       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.141.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.141.158:443: connect: connection refused" logger="UnhandledError"
	I1029 08:22:27.278367       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1029 08:23:23.577173       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40844: use of closed network connection
	E1029 08:23:23.948477       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40892: use of closed network connection
	
	
	==> kube-controller-manager [349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47] <==
	I1029 08:21:26.945995       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 08:21:26.946062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-757691"
	I1029 08:21:26.946103       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 08:21:26.946974       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 08:21:26.947047       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:21:26.948025       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 08:21:26.948073       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 08:21:26.948097       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:21:26.948276       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:21:26.948566       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:21:26.948750       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 08:21:26.949997       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 08:21:26.950070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:21:26.963109       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:21:26.969230       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E1029 08:21:56.906934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:21:56.907091       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1029 08:21:56.907154       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1029 08:21:56.954284       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1029 08:21:56.965662       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1029 08:21:57.007857       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:21:57.072231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:22:11.954238       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1029 08:22:27.014609       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:22:27.150713       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b] <==
	I1029 08:21:28.316489       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:21:28.533835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:21:28.634214       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:21:28.634258       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:21:28.634324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:21:28.861207       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:21:28.861339       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:21:28.869880       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:21:28.890231       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:21:28.890260       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:21:28.894739       1 config.go:200] "Starting service config controller"
	I1029 08:21:28.894754       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:21:28.894776       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:21:28.894780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:21:28.894804       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:21:28.894809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:21:28.926331       1 config.go:309] "Starting node config controller"
	I1029 08:21:28.926359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:21:28.926368       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:21:28.996483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:21:28.996526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:21:28.996578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970] <==
	I1029 08:21:20.043946       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 08:21:20.048684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1029 08:21:20.048888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:21:20.048963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:21:20.049035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:21:20.053130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:21:20.053390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:21:20.053493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:21:20.053585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 08:21:20.053678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:21:20.053800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:21:20.053892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:21:20.053978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:21:20.054064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:21:20.054151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:21:20.054236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:21:20.054340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:21:20.054429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:21:20.054571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:21:20.054719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:21:21.074110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:21:21.078729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:21:21.081976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1029 08:21:21.139830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1029 08:21:22.743792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:22:46 addons-757691 kubelet[1271]: I1029 08:22:46.218214    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-k472l" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:22:46 addons-757691 kubelet[1271]: I1029 08:22:46.233968    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-k472l" podStartSLOduration=1.91445395 podStartE2EDuration="37.233952126s" podCreationTimestamp="2025-10-29 08:22:09 +0000 UTC" firstStartedPulling="2025-10-29 08:22:10.471278554 +0000 UTC m=+48.047340244" lastFinishedPulling="2025-10-29 08:22:45.79077673 +0000 UTC m=+83.366838420" observedRunningTime="2025-10-29 08:22:46.232808229 +0000 UTC m=+83.808869919" watchObservedRunningTime="2025-10-29 08:22:46.233952126 +0000 UTC m=+83.810013824"
	Oct 29 08:22:47 addons-757691 kubelet[1271]: I1029 08:22:47.221012    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-k472l" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:22:50 addons-757691 kubelet[1271]: I1029 08:22:50.260530    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wsh7n" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:22:50 addons-757691 kubelet[1271]: I1029 08:22:50.290341    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-wsh7n" podStartSLOduration=2.590972825 podStartE2EDuration="41.290320554s" podCreationTimestamp="2025-10-29 08:22:09 +0000 UTC" firstStartedPulling="2025-10-29 08:22:10.483553102 +0000 UTC m=+48.059614800" lastFinishedPulling="2025-10-29 08:22:49.182900839 +0000 UTC m=+86.758962529" observedRunningTime="2025-10-29 08:22:50.289155143 +0000 UTC m=+87.865216833" watchObservedRunningTime="2025-10-29 08:22:50.290320554 +0000 UTC m=+87.866382243"
	Oct 29 08:22:50 addons-757691 kubelet[1271]: I1029 08:22:50.529555    1271 scope.go:117] "RemoveContainer" containerID="dca7c0c10447a2c3374ce3c4f24660e64ccfc51fb86c07885e372f778624ff97"
	Oct 29 08:22:51 addons-757691 kubelet[1271]: I1029 08:22:51.263631    1271 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wsh7n" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:22:53 addons-757691 kubelet[1271]: I1029 08:22:53.272091    1271 scope.go:117] "RemoveContainer" containerID="dca7c0c10447a2c3374ce3c4f24660e64ccfc51fb86c07885e372f778624ff97"
	Oct 29 08:22:53 addons-757691 kubelet[1271]: I1029 08:22:53.327927    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-lfsrs" podStartSLOduration=65.828921194 podStartE2EDuration="1m20.327899651s" podCreationTimestamp="2025-10-29 08:21:33 +0000 UTC" firstStartedPulling="2025-10-29 08:22:38.674038686 +0000 UTC m=+76.250100376" lastFinishedPulling="2025-10-29 08:22:53.173017135 +0000 UTC m=+90.749078833" observedRunningTime="2025-10-29 08:22:53.327152911 +0000 UTC m=+90.903214617" watchObservedRunningTime="2025-10-29 08:22:53.327899651 +0000 UTC m=+90.903961349"
	Oct 29 08:22:54 addons-757691 kubelet[1271]: I1029 08:22:54.674430    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lcvw\" (UniqueName: \"kubernetes.io/projected/e31221ec-3307-42df-87c6-840b25361cab-kube-api-access-7lcvw\") pod \"e31221ec-3307-42df-87c6-840b25361cab\" (UID: \"e31221ec-3307-42df-87c6-840b25361cab\") "
	Oct 29 08:22:54 addons-757691 kubelet[1271]: I1029 08:22:54.684495    1271 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e31221ec-3307-42df-87c6-840b25361cab-kube-api-access-7lcvw" (OuterVolumeSpecName: "kube-api-access-7lcvw") pod "e31221ec-3307-42df-87c6-840b25361cab" (UID: "e31221ec-3307-42df-87c6-840b25361cab"). InnerVolumeSpecName "kube-api-access-7lcvw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 29 08:22:54 addons-757691 kubelet[1271]: I1029 08:22:54.775794    1271 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7lcvw\" (UniqueName: \"kubernetes.io/projected/e31221ec-3307-42df-87c6-840b25361cab-kube-api-access-7lcvw\") on node \"addons-757691\" DevicePath \"\""
	Oct 29 08:22:55 addons-757691 kubelet[1271]: I1029 08:22:55.343594    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ca7b39cb07e022114af8b0bc3a8adda31eee24ee45edd6774fd8a50d8ed280d"
	Oct 29 08:23:04 addons-757691 kubelet[1271]: I1029 08:23:04.415623    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-7c65l" podStartSLOduration=64.965008701 podStartE2EDuration="1m27.415608743s" podCreationTimestamp="2025-10-29 08:21:37 +0000 UTC" firstStartedPulling="2025-10-29 08:22:41.653827655 +0000 UTC m=+79.229889345" lastFinishedPulling="2025-10-29 08:23:04.104427697 +0000 UTC m=+101.680489387" observedRunningTime="2025-10-29 08:23:04.414743544 +0000 UTC m=+101.990805316" watchObservedRunningTime="2025-10-29 08:23:04.415608743 +0000 UTC m=+101.991670441"
	Oct 29 08:23:04 addons-757691 kubelet[1271]: I1029 08:23:04.416277    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-8xwgl" podStartSLOduration=70.650669006 podStartE2EDuration="1m30.416268581s" podCreationTimestamp="2025-10-29 08:21:34 +0000 UTC" firstStartedPulling="2025-10-29 08:22:41.261830377 +0000 UTC m=+78.837892067" lastFinishedPulling="2025-10-29 08:23:01.027429944 +0000 UTC m=+98.603491642" observedRunningTime="2025-10-29 08:23:01.409973401 +0000 UTC m=+98.986035132" watchObservedRunningTime="2025-10-29 08:23:04.416268581 +0000 UTC m=+101.992330271"
	Oct 29 08:23:06 addons-757691 kubelet[1271]: I1029 08:23:06.782961    1271 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 29 08:23:06 addons-757691 kubelet[1271]: I1029 08:23:06.783040    1271 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 29 08:23:12 addons-757691 kubelet[1271]: I1029 08:23:12.045646    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-gzlfm" podStartSLOduration=3.977729953 podStartE2EDuration="1m3.045624454s" podCreationTimestamp="2025-10-29 08:22:09 +0000 UTC" firstStartedPulling="2025-10-29 08:22:10.263862691 +0000 UTC m=+47.839924381" lastFinishedPulling="2025-10-29 08:23:09.331757192 +0000 UTC m=+106.907818882" observedRunningTime="2025-10-29 08:23:09.479543597 +0000 UTC m=+107.055605295" watchObservedRunningTime="2025-10-29 08:23:12.045624454 +0000 UTC m=+109.621686144"
	Oct 29 08:23:12 addons-757691 kubelet[1271]: I1029 08:23:12.537521    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4958d670-acaf-4862-ba9c-fbf319f70208" path="/var/lib/kubelet/pods/4958d670-acaf-4862-ba9c-fbf319f70208/volumes"
	Oct 29 08:23:13 addons-757691 kubelet[1271]: E1029 08:23:13.047735    1271 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 29 08:23:13 addons-757691 kubelet[1271]: E1029 08:23:13.047836    1271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc216c07-bc5f-4a39-a59b-999712532cfd-gcr-creds podName:dc216c07-bc5f-4a39-a59b-999712532cfd nodeName:}" failed. No retries permitted until 2025-10-29 08:24:17.047817774 +0000 UTC m=+174.623879464 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/dc216c07-bc5f-4a39-a59b-999712532cfd-gcr-creds") pod "registry-creds-764b6fb674-7wrll" (UID: "dc216c07-bc5f-4a39-a59b-999712532cfd") : secret "registry-creds-gcr" not found
	Oct 29 08:23:13 addons-757691 kubelet[1271]: I1029 08:23:13.451853    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3136dfda-447e-4351-bffc-ab9f47a42a8b-gcp-creds\") pod \"busybox\" (UID: \"3136dfda-447e-4351-bffc-ab9f47a42a8b\") " pod="default/busybox"
	Oct 29 08:23:13 addons-757691 kubelet[1271]: I1029 08:23:13.452097    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4w6b\" (UniqueName: \"kubernetes.io/projected/3136dfda-447e-4351-bffc-ab9f47a42a8b-kube-api-access-d4w6b\") pod \"busybox\" (UID: \"3136dfda-447e-4351-bffc-ab9f47a42a8b\") " pod="default/busybox"
	Oct 29 08:23:16 addons-757691 kubelet[1271]: I1029 08:23:16.486976    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5113219070000001 podStartE2EDuration="3.486956025s" podCreationTimestamp="2025-10-29 08:23:13 +0000 UTC" firstStartedPulling="2025-10-29 08:23:13.634185847 +0000 UTC m=+111.210247545" lastFinishedPulling="2025-10-29 08:23:15.609819973 +0000 UTC m=+113.185881663" observedRunningTime="2025-10-29 08:23:16.485785896 +0000 UTC m=+114.061847594" watchObservedRunningTime="2025-10-29 08:23:16.486956025 +0000 UTC m=+114.063017715"
	Oct 29 08:23:22 addons-757691 kubelet[1271]: I1029 08:23:22.590309    1271 scope.go:117] "RemoveContainer" containerID="0fac6054e444479169b91fb479bb04a3b40907ebcf5fb3a0d5a2b999e968a09e"
	
	
	==> storage-provisioner [bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78] <==
	W1029 08:23:00.525920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:02.536120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:02.545495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:04.548350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:04.553783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:06.557587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:06.566481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:08.569486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:08.574833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:10.578971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:10.583519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:12.587101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:12.592120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:14.595374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:14.602964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:16.606481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:16.613809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:18.616404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:18.620932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:20.624489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:20.631130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:22.634670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:22.643206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:24.647286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:23:24.654199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-757691 -n addons-757691
helpers_test.go:269: (dbg) Run:  kubectl --context addons-757691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l registry-creds-764b6fb674-7wrll
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-757691 describe pod ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l registry-creds-764b6fb674-7wrll
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-757691 describe pod ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l registry-creds-764b6fb674-7wrll: exit status 1 (99.955269ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6btnm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gtc6l" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7wrll" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-757691 describe pod ingress-nginx-admission-create-6btnm ingress-nginx-admission-patch-gtc6l registry-creds-764b6fb674-7wrll: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.991188ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:23:27.260486   12003 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:27.260678   12003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:27.260691   12003 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:27.260698   12003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:27.260965   12003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:23:27.261285   12003 mustload.go:66] Loading cluster: addons-757691
	I1029 08:23:27.261699   12003 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:27.261719   12003 addons.go:607] checking whether the cluster is paused
	I1029 08:23:27.261862   12003 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:27.261880   12003 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:23:27.262393   12003 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:23:27.280004   12003 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:27.280075   12003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:23:27.297822   12003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:23:27.403773   12003 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:27.404657   12003 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:27.437310   12003 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:23:27.437386   12003 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:23:27.437405   12003 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:23:27.437427   12003 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:23:27.437465   12003 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:23:27.437490   12003 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:23:27.437513   12003 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:23:27.437551   12003 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:23:27.437576   12003 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:23:27.437603   12003 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:23:27.437640   12003 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:23:27.437664   12003 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:23:27.437687   12003 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:23:27.437725   12003 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:23:27.437750   12003 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:23:27.437784   12003 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:23:27.437826   12003 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:23:27.437850   12003 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:23:27.437875   12003 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:23:27.437911   12003 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:23:27.437938   12003 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:23:27.437960   12003 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:23:27.437997   12003 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:23:27.438022   12003 cri.go:89] found id: ""
	I1029 08:23:27.438117   12003 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:27.456569   12003 out.go:203] 
	W1029 08:23:27.459551   12003 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:27.459652   12003 out.go:285] * 
	* 
	W1029 08:23:27.463975   12003 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:27.466955   12003 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.24s)
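For reference, the failing disable flow above reduces to two node-side commands that appear verbatim in the stderr trace: the crictl listing of kube-system containers and "sudo runc list -f json", which is the call that returns "open /run/runc: no such file or directory" and exit status 1. A minimal, hypothetical Go sketch of that sequence (run directly on the node rather than over SSH the way minikube's ssh_runner does; this is not minikube's actual code) might look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same listing the trace shows at cri.go: enumerate kube-system containers.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl ps failed: %v\n%s", err, ids)
			return
		}
		fmt.Printf("kube-system container IDs:\n%s", ids)

		// The step that fails in this report: asking runc for its container list.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node it exits with status 1 and
			// "open /run/runc: no such file or directory".
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers:\n%s", out)
	}

On a CRI-O node like the one in this report, the second command would be expected to reproduce the same failure that each MK_ADDON_DISABLE_PAUSED exit above records.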

                                                
                                    
TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-ddvrf" [f50275f8-6dc2-47e1-b549-51c666dea492] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003390355s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (291.665903ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:24:33.391518   13957 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:33.391742   13957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:33.391774   13957 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:33.391795   13957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:33.392060   13957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:33.392418   13957 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:33.392858   13957 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:33.392902   13957 addons.go:607] checking whether the cluster is paused
	I1029 08:24:33.393043   13957 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:33.393080   13957 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:33.393573   13957 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:33.420483   13957 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:33.420537   13957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:33.441461   13957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:33.543044   13957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:33.543127   13957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:33.573296   13957 cri.go:89] found id: "6c75b49eb3056f1dc436aa728bf44b6683e55f147c35db10932181485753a576"
	I1029 08:24:33.573319   13957 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:33.573324   13957 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:33.573328   13957 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:33.573332   13957 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:33.573335   13957 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:33.573338   13957 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:33.573341   13957 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:33.573344   13957 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:33.573351   13957 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:33.573354   13957 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:33.573357   13957 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:33.573361   13957 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:33.573364   13957 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:33.573368   13957 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:33.573393   13957 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:33.573401   13957 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:33.573406   13957 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:33.573410   13957 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:33.573413   13957 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:33.573418   13957 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:33.573421   13957 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:33.573424   13957 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:33.573427   13957 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:33.573430   13957 cri.go:89] found id: ""
	I1029 08:24:33.573478   13957 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:33.588776   13957 out.go:203] 
	W1029 08:24:33.591795   13957 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:33.591823   13957 out.go:285] * 
	* 
	W1029 08:24:33.596184   13957 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:33.599102   13957 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)
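This failure pattern recurs in the LocalPath, NvidiaDevicePlugin and Yakd tests below: before disabling an addon, the CLI first checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" over SSH. On this CRI-O node the runc state directory /run/runc does not exist (the containers are apparently managed through a different runtime root), so the probe exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED / exit status 11 even though nothing is paused. The Go sketch below is a minimal, hypothetical reproduction of that probe, not minikube's actual implementation; it only illustrates how a missing /run/runc surfaces as the error quoted in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runcList mirrors the probe seen in the log: sudo runc list -f json.
// When /run/runc is absent, runc prints
// "open /run/runc: no such file or directory" and exits 1, which the
// caller then reports as "check paused: list paused: runc: ...".
func runcList() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	return out, nil
}

func main() {
	if _, err := runcList(); err != nil {
		fmt.Fprintln(os.Stderr, "check paused failed:", err)
		os.Exit(1) // the non-zero exit that the CLI turns into MK_ADDON_DISABLE_PAUSED
	}
	fmt.Println("runtime state readable; paused check can proceed")
}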

                                                
                                    
TestAddons/parallel/LocalPath (8.52s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-757691 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-757691 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-757691 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [99352a4b-83c2-4180-bd2f-c411b1f911ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [99352a4b-83c2-4180-bd2f-c411b1f911ab] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [99352a4b-83c2-4180-bd2f-c411b1f911ab] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003514417s
addons_test.go:967: (dbg) Run:  kubectl --context addons-757691 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 ssh "cat /opt/local-path-provisioner/pvc-e1dc20ec-fec2-44cc-ac2b-af307dd1a9cc_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-757691 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-757691 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (280.606518ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:24:28.070511   13848 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:28.070723   13848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:28.070737   13848 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:28.070743   13848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:28.071629   13848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:28.072053   13848 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:28.072604   13848 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:28.072629   13848 addons.go:607] checking whether the cluster is paused
	I1029 08:24:28.072795   13848 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:28.072822   13848 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:28.073388   13848 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:28.093873   13848 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:28.093936   13848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:28.112747   13848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:28.226897   13848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:28.226983   13848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:28.255271   13848 cri.go:89] found id: "6c75b49eb3056f1dc436aa728bf44b6683e55f147c35db10932181485753a576"
	I1029 08:24:28.255290   13848 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:28.255295   13848 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:28.255299   13848 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:28.255304   13848 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:28.255307   13848 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:28.255310   13848 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:28.255313   13848 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:28.255317   13848 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:28.255322   13848 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:28.255326   13848 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:28.255329   13848 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:28.255332   13848 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:28.255335   13848 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:28.255338   13848 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:28.255343   13848 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:28.255346   13848 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:28.255350   13848 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:28.255353   13848 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:28.255356   13848 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:28.255362   13848 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:28.255366   13848 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:28.255368   13848 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:28.255372   13848 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:28.255374   13848 cri.go:89] found id: ""
	I1029 08:24:28.255435   13848 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:28.285426   13848 out.go:203] 
	W1029 08:24:28.288663   13848 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:28.288694   13848 out.go:285] * 
	* 
	W1029 08:24:28.293091   13848 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:28.296206   13848 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.52s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-k472l" [a67db85b-cb6e-4585-82c5-297b38983141] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003994377s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (267.520637ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:24:13.312493   13400 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:13.312678   13400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:13.312690   13400 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:13.312695   13400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:13.312996   13400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:13.313301   13400 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:13.313767   13400 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:13.313800   13400 addons.go:607] checking whether the cluster is paused
	I1029 08:24:13.314674   13400 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:13.314717   13400 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:13.315235   13400 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:13.331681   13400 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:13.331737   13400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:13.349753   13400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:13.458725   13400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:13.458842   13400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:13.491228   13400 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:13.491262   13400 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:13.491267   13400 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:13.491272   13400 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:13.491275   13400 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:13.491279   13400 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:13.491283   13400 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:13.491286   13400 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:13.491291   13400 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:13.491299   13400 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:13.491302   13400 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:13.491306   13400 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:13.491310   13400 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:13.491323   13400 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:13.491327   13400 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:13.491338   13400 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:13.491347   13400 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:13.491352   13400 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:13.491356   13400 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:13.491359   13400 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:13.491364   13400 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:13.491368   13400 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:13.491371   13400 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:13.491374   13400 cri.go:89] found id: ""
	I1029 08:24:13.491435   13400 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:13.507119   13400 out.go:203] 
	W1029 08:24:13.510013   13400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:13.510041   13400 out.go:285] * 
	* 
	W1029 08:24:13.514432   13400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:13.517433   13400 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                    
TestAddons/parallel/Yakd (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-z7rr6" [8f104349-a77c-4435-a1ea-0a2bd0d810f2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002819542s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-757691 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-757691 addons disable yakd --alsologtostderr -v=1: exit status 11 (250.968379ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:24:19.573811   13468 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:24:19.574047   13468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:19.574075   13468 out.go:374] Setting ErrFile to fd 2...
	I1029 08:24:19.574096   13468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:24:19.574393   13468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:24:19.574708   13468 mustload.go:66] Loading cluster: addons-757691
	I1029 08:24:19.575108   13468 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:19.575144   13468 addons.go:607] checking whether the cluster is paused
	I1029 08:24:19.575287   13468 config.go:182] Loaded profile config "addons-757691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:24:19.575317   13468 host.go:66] Checking if "addons-757691" exists ...
	I1029 08:24:19.575797   13468 cli_runner.go:164] Run: docker container inspect addons-757691 --format={{.State.Status}}
	I1029 08:24:19.593134   13468 ssh_runner.go:195] Run: systemctl --version
	I1029 08:24:19.593281   13468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-757691
	I1029 08:24:19.609817   13468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/addons-757691/id_rsa Username:docker}
	I1029 08:24:19.715641   13468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:24:19.715730   13468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:24:19.746001   13468 cri.go:89] found id: "ee8944794e8050551c59ad29f1e3e516d055471261079ddb98ad1b18d85f8d62"
	I1029 08:24:19.746032   13468 cri.go:89] found id: "32f7a28d2d03b12a04f38527066ab5cdace38391dbd7e81a25de50ac95ea189d"
	I1029 08:24:19.746037   13468 cri.go:89] found id: "239ec534461a096cf94705920f445c2256dd88aaa699d21479b90194a3837f9b"
	I1029 08:24:19.746051   13468 cri.go:89] found id: "0555333eb38f561643aa85f1253ffad88ad99d3734392074f633148511ce3081"
	I1029 08:24:19.746059   13468 cri.go:89] found id: "b7ebb9338f4b71874206cc6aa8143d99e673a9cca1b219506840b748ac705b60"
	I1029 08:24:19.746063   13468 cri.go:89] found id: "861cd9d17d1a25a1554adc0ae16a417206ae256ce09efb8acbb8fbdfd34b1733"
	I1029 08:24:19.746066   13468 cri.go:89] found id: "4f38205b7fd4d543287d30e2654b8b18c64c68ac9936ecc6de021a7f18188c65"
	I1029 08:24:19.746069   13468 cri.go:89] found id: "080445adfb2737e11888db144d48240f8f457851f5dd235ba8ac2de2d56a6f02"
	I1029 08:24:19.746072   13468 cri.go:89] found id: "525382941facb4662c4472842cc827c30b969d0ba588b1fe4bd1ab1a8be43d02"
	I1029 08:24:19.746078   13468 cri.go:89] found id: "c8fe768126de326968797194f6739f6b4dffc8edd42a7e3da422ab55d6c46d31"
	I1029 08:24:19.746081   13468 cri.go:89] found id: "444ef3af30aeb87e6a1cef7fe02d50c1eeb0628ff4d53cf0d6d76407448af653"
	I1029 08:24:19.746094   13468 cri.go:89] found id: "a89be2ad8c3cbb179996675c4f579261e541010fed42ffff33e36d897e051d6f"
	I1029 08:24:19.746098   13468 cri.go:89] found id: "03254ae94d330d94320842bf836194b38de9aa234ed810020b44739f573b3a1f"
	I1029 08:24:19.746102   13468 cri.go:89] found id: "380a55eebf3cdcc226730df7d2181cf069c2ff5fa31ba1bd7f7ecbdbb1a00c53"
	I1029 08:24:19.746105   13468 cri.go:89] found id: "dbc66dc27a6154e247feb539a4148136556f003707e138999e20759485b59218"
	I1029 08:24:19.746110   13468 cri.go:89] found id: "561fd8a7601359c5c1ac06320b6c023314bf2d9c888338eb6db0cb74cf760ad6"
	I1029 08:24:19.746113   13468 cri.go:89] found id: "bc4be5a012bc9f8e39fa97fa9dfd2e049f3d28d71ee13ad96c3db8f172403a78"
	I1029 08:24:19.746116   13468 cri.go:89] found id: "fb05a0521754d6e3abce78732cce5547c6dfcfddd236c0d82161786ca543e41b"
	I1029 08:24:19.746119   13468 cri.go:89] found id: "bdb041cabd34f35415d6aa99e1925090bda9745d10bfd7e1e4a7ce721cfb04de"
	I1029 08:24:19.746122   13468 cri.go:89] found id: "349c9103101d7725e278ac33a2d7d761e55f35837d834c1cec2dbbfe3add8d47"
	I1029 08:24:19.746127   13468 cri.go:89] found id: "6fb3b53c30069d80f0ce7ee16f7eedad1c380d15ce86f571d6bbe59e3f920970"
	I1029 08:24:19.746130   13468 cri.go:89] found id: "df417919fab6fd07c060b65a32c9220edeee697791536b0fa3a6e2baada5b377"
	I1029 08:24:19.746133   13468 cri.go:89] found id: "2a94afd232256c9970e37e3077aaf55baec83c1b05f44ac0cb94c7d529e48160"
	I1029 08:24:19.746136   13468 cri.go:89] found id: ""
	I1029 08:24:19.746200   13468 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:24:19.762024   13468 out.go:203] 
	W1029 08:24:19.765266   13468 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:24:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:24:19.765294   13468 out.go:285] * 
	* 
	W1029 08:24:19.769666   13468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:24:19.772520   13468 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-757691 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-546837 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-546837 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-r8mwc" [2c3280d6-0927-4a4b-bf3e-263965e53c99] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-546837 -n functional-546837
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-29 08:40:34.957590724 +0000 UTC m=+1217.977167868
functional_test.go:1645: (dbg) Run:  kubectl --context functional-546837 describe po hello-node-connect-7d85dfc575-r8mwc -n default
functional_test.go:1645: (dbg) kubectl --context functional-546837 describe po hello-node-connect-7d85dfc575-r8mwc -n default:
Name:             hello-node-connect-7d85dfc575-r8mwc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-546837/192.168.49.2
Start Time:       Wed, 29 Oct 2025 08:30:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s5tww (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s5tww:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r8mwc to functional-546837
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m48s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-546837 logs hello-node-connect-7d85dfc575-r8mwc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-546837 logs hello-node-connect-7d85dfc575-r8mwc -n default: exit status 1 (104.888689ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-r8mwc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-546837 logs hello-node-connect-7d85dfc575-r8mwc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-546837 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-r8mwc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-546837/192.168.49.2
Start Time:       Wed, 29 Oct 2025 08:30:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s5tww (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s5tww:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r8mwc to functional-546837
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m48s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-546837 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-546837 logs -l app=hello-node-connect: exit status 1 (84.467809ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-r8mwc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-546837 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-546837 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.111.163
IPs:                      10.96.111.163
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31467/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
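The kubelet events above point at CRI-O short-name resolution rather than at the service plumbing: the deployment was created with the unqualified image name kicbase/echo-server, and with short-name mode set to enforcing an unqualified name that matches more than one unqualified-search registry is rejected as "ambiguous" instead of being pulled, so the pod never leaves ImagePullBackOff and the NodePort service keeps an empty endpoint list. A hedged workaround sketch follows (run after deleting the failing deployment); the docker.io prefix is an assumption about which registry actually hosts the image, so treat it as illustration only.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Recreate the deployment with a fully qualified image reference so the
	// runtime never has to guess a registry. docker.io is assumed here.
	cmd := exec.Command("kubectl", "--context", "functional-546837",
		"create", "deployment", "hello-node-connect",
		"--image", "docker.io/kicbase/echo-server:latest")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("create deployment failed: %v\n%s", err, out)
	}
	log.Printf("deployment created:\n%s", out)
}

Pinning the short name to a single registry with a short-name alias in the node's registries.conf would avoid the ambiguity as well.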
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-546837
helpers_test.go:243: (dbg) docker inspect functional-546837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5",
	        "Created": "2025-10-29T08:27:33.19295803Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20299,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:27:33.250808691Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5/hosts",
	        "LogPath": "/var/lib/docker/containers/5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5/5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5-json.log",
	        "Name": "/functional-546837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-546837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-546837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c12e7d01af442284af45dc8be20f1557104d7877ba5186fdb20b26e76dec2a5",
	                "LowerDir": "/var/lib/docker/overlay2/b7238b7418ad21ecda74e7182b847fec2aa3333a1c9a781aa6ac67d69456aff1-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7238b7418ad21ecda74e7182b847fec2aa3333a1c9a781aa6ac67d69456aff1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7238b7418ad21ecda74e7182b847fec2aa3333a1c9a781aa6ac67d69456aff1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7238b7418ad21ecda74e7182b847fec2aa3333a1c9a781aa6ac67d69456aff1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-546837",
	                "Source": "/var/lib/docker/volumes/functional-546837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-546837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-546837",
	                "name.minikube.sigs.k8s.io": "functional-546837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a64e1450e45660cc5319b85e1ad83c37c592d03faecd7ceda230f46c5381986",
	            "SandboxKey": "/var/run/docker/netns/9a64e1450e45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-546837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:53:47:d0:b1:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bf368d3511189db686a074a28063ad10cfc75fd1115dbab87c281f569edcd235",
	                    "EndpointID": "5ec63a201a1bf0ff01b344213ee97f81a54b99a5d3e2d5097c861f2b82837178",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-546837",
	                        "5c12e7d01af4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-546837 -n functional-546837
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 logs -n 25: (1.501060611s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-546837 ssh sudo cat /etc/ssl/certs/4550.pem                                                                                                    │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image load --daemon kicbase/echo-server:functional-546837 --alsologtostderr                                                             │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh sudo cat /usr/share/ca-certificates/4550.pem                                                                                        │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image ls                                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh sudo cat /etc/ssl/certs/45502.pem                                                                                                   │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image load --daemon kicbase/echo-server:functional-546837 --alsologtostderr                                                             │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh sudo cat /usr/share/ca-certificates/45502.pem                                                                                       │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image ls                                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh sudo cat /etc/test/nested/copy/4550/hosts                                                                                           │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image load --daemon kicbase/echo-server:functional-546837 --alsologtostderr                                                             │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image ls                                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image save kicbase/echo-server:functional-546837 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image rm kicbase/echo-server:functional-546837 --alsologtostderr                                                                        │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh echo hello                                                                                                                          │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image ls                                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ ssh     │ functional-546837 ssh cat /etc/hostname                                                                                                                   │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ image   │ functional-546837 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ tunnel  │ functional-546837 tunnel --alsologtostderr                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │                     │
	│ tunnel  │ functional-546837 tunnel --alsologtostderr                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │                     │
	│ image   │ functional-546837 image save --daemon kicbase/echo-server:functional-546837 --alsologtostderr                                                             │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ tunnel  │ functional-546837 tunnel --alsologtostderr                                                                                                                │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │                     │
	│ addons  │ functional-546837 addons list                                                                                                                             │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	│ addons  │ functional-546837 addons list -o json                                                                                                                     │ functional-546837 │ jenkins │ v1.37.0 │ 29 Oct 25 08:30 UTC │ 29 Oct 25 08:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:29:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:29:36.518153   24649 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:29:36.518292   24649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:29:36.518296   24649 out.go:374] Setting ErrFile to fd 2...
	I1029 08:29:36.518300   24649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:29:36.518580   24649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:29:36.518957   24649 out.go:368] Setting JSON to false
	I1029 08:29:36.519862   24649 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":728,"bootTime":1761725848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:29:36.519924   24649 start.go:143] virtualization:  
	I1029 08:29:36.523743   24649 out.go:179] * [functional-546837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:29:36.527741   24649 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:29:36.527818   24649 notify.go:221] Checking for updates...
	I1029 08:29:36.533535   24649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:29:36.536515   24649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:29:36.539481   24649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:29:36.542316   24649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:29:36.545146   24649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:29:36.548426   24649 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:29:36.548521   24649 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:29:36.588949   24649 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:29:36.589069   24649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:29:36.653432   24649 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-29 08:29:36.643761316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:29:36.653527   24649 docker.go:319] overlay module found
	I1029 08:29:36.656597   24649 out.go:179] * Using the docker driver based on existing profile
	I1029 08:29:36.659402   24649 start.go:309] selected driver: docker
	I1029 08:29:36.659410   24649 start.go:930] validating driver "docker" against &{Name:functional-546837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:29:36.659507   24649 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:29:36.659617   24649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:29:36.722218   24649 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-29 08:29:36.713347043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:29:36.722614   24649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:29:36.722645   24649 cni.go:84] Creating CNI manager for ""
	I1029 08:29:36.722698   24649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:29:36.722737   24649 start.go:353] cluster config:
	{Name:functional-546837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:29:36.727530   24649 out.go:179] * Starting "functional-546837" primary control-plane node in "functional-546837" cluster
	I1029 08:29:36.730283   24649 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:29:36.733149   24649 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:29:36.735845   24649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:29:36.735890   24649 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:29:36.735901   24649 cache.go:59] Caching tarball of preloaded images
	I1029 08:29:36.735930   24649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:29:36.735980   24649 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:29:36.735989   24649 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:29:36.736101   24649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/config.json ...
	I1029 08:29:36.755675   24649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:29:36.755687   24649 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:29:36.755703   24649 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:29:36.755725   24649 start.go:360] acquireMachinesLock for functional-546837: {Name:mk98225bd18e0442d0c86fc438324868c0a0bb11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:29:36.755780   24649 start.go:364] duration metric: took 39.525µs to acquireMachinesLock for "functional-546837"
	I1029 08:29:36.755798   24649 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:29:36.755802   24649 fix.go:54] fixHost starting: 
	I1029 08:29:36.756087   24649 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
	I1029 08:29:36.773236   24649 fix.go:112] recreateIfNeeded on functional-546837: state=Running err=<nil>
	W1029 08:29:36.773255   24649 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:29:36.776436   24649 out.go:252] * Updating the running docker "functional-546837" container ...
	I1029 08:29:36.776460   24649 machine.go:94] provisionDockerMachine start ...
	I1029 08:29:36.776552   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:36.794084   24649 main.go:143] libmachine: Using SSH client type: native
	I1029 08:29:36.794447   24649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1029 08:29:36.794454   24649 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:29:36.943850   24649 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-546837
	
	I1029 08:29:36.943879   24649 ubuntu.go:182] provisioning hostname "functional-546837"
	I1029 08:29:36.943944   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:36.961397   24649 main.go:143] libmachine: Using SSH client type: native
	I1029 08:29:36.961690   24649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1029 08:29:36.961699   24649 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-546837 && echo "functional-546837" | sudo tee /etc/hostname
	I1029 08:29:37.125230   24649 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-546837
	
	I1029 08:29:37.125295   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:37.143404   24649 main.go:143] libmachine: Using SSH client type: native
	I1029 08:29:37.143764   24649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1029 08:29:37.143792   24649 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546837/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:29:37.292494   24649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:29:37.292508   24649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:29:37.292534   24649 ubuntu.go:190] setting up certificates
	I1029 08:29:37.292553   24649 provision.go:84] configureAuth start
	I1029 08:29:37.292620   24649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546837
	I1029 08:29:37.310661   24649 provision.go:143] copyHostCerts
	I1029 08:29:37.310746   24649 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:29:37.310764   24649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:29:37.310841   24649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:29:37.310946   24649 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:29:37.310950   24649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:29:37.310976   24649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:29:37.311037   24649 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:29:37.311040   24649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:29:37.311064   24649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:29:37.311139   24649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.functional-546837 san=[127.0.0.1 192.168.49.2 functional-546837 localhost minikube]
	I1029 08:29:37.895515   24649 provision.go:177] copyRemoteCerts
	I1029 08:29:37.895567   24649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:29:37.895611   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:37.914278   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:38.022134   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:29:38.047102   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 08:29:38.067687   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:29:38.088553   24649 provision.go:87] duration metric: took 795.978143ms to configureAuth
	I1029 08:29:38.088585   24649 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:29:38.088779   24649 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:29:38.088881   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:38.111863   24649 main.go:143] libmachine: Using SSH client type: native
	I1029 08:29:38.112170   24649 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1029 08:29:38.112182   24649 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:29:43.489487   24649 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:29:43.489500   24649 machine.go:97] duration metric: took 6.713033293s to provisionDockerMachine
	I1029 08:29:43.489509   24649 start.go:293] postStartSetup for "functional-546837" (driver="docker")
	I1029 08:29:43.489519   24649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:29:43.489596   24649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:29:43.489634   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:43.509807   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:43.611951   24649 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:29:43.615213   24649 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:29:43.615232   24649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:29:43.615241   24649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:29:43.615293   24649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:29:43.615370   24649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:29:43.615440   24649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/test/nested/copy/4550/hosts -> hosts in /etc/test/nested/copy/4550
	I1029 08:29:43.615483   24649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4550
	I1029 08:29:43.622782   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:29:43.639603   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/test/nested/copy/4550/hosts --> /etc/test/nested/copy/4550/hosts (40 bytes)
	I1029 08:29:43.656873   24649 start.go:296] duration metric: took 167.350617ms for postStartSetup
	I1029 08:29:43.656941   24649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:29:43.656996   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:43.673953   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:43.773502   24649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:29:43.778630   24649 fix.go:56] duration metric: took 7.022821036s for fixHost
	I1029 08:29:43.778649   24649 start.go:83] releasing machines lock for "functional-546837", held for 7.022857639s
	I1029 08:29:43.778745   24649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-546837
	I1029 08:29:43.795481   24649 ssh_runner.go:195] Run: cat /version.json
	I1029 08:29:43.795521   24649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:29:43.795524   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:43.795589   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:43.819892   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:43.821305   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:43.924054   24649 ssh_runner.go:195] Run: systemctl --version
	I1029 08:29:44.019384   24649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:29:44.057869   24649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:29:44.062639   24649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:29:44.062717   24649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:29:44.070988   24649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:29:44.071003   24649 start.go:496] detecting cgroup driver to use...
	I1029 08:29:44.071037   24649 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:29:44.071105   24649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:29:44.087551   24649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:29:44.101186   24649 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:29:44.101254   24649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:29:44.117459   24649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:29:44.130965   24649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:29:44.271005   24649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:29:44.413019   24649 docker.go:234] disabling docker service ...
	I1029 08:29:44.413084   24649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:29:44.428975   24649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:29:44.442081   24649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:29:44.574291   24649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:29:44.714266   24649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:29:44.726797   24649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:29:44.740956   24649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:29:44.741019   24649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.750345   24649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:29:44.750399   24649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.760190   24649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.769578   24649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.778471   24649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:29:44.786415   24649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.795055   24649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.803227   24649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:29:44.811717   24649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:29:44.819226   24649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:29:44.827222   24649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:29:44.957146   24649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:29:45.243531   24649 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:29:45.243601   24649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:29:45.251815   24649 start.go:564] Will wait 60s for crictl version
	I1029 08:29:45.251881   24649 ssh_runner.go:195] Run: which crictl
	I1029 08:29:45.256860   24649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:29:45.301859   24649 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:29:45.301968   24649 ssh_runner.go:195] Run: crio --version
	I1029 08:29:45.345790   24649 ssh_runner.go:195] Run: crio --version
	I1029 08:29:45.378284   24649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:29:45.381201   24649 cli_runner.go:164] Run: docker network inspect functional-546837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:29:45.399281   24649 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:29:45.406377   24649 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1029 08:29:45.409368   24649 kubeadm.go:884] updating cluster {Name:functional-546837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:29:45.409476   24649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:29:45.409548   24649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:29:45.470092   24649 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:29:45.470103   24649 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:29:45.470157   24649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:29:45.506983   24649 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:29:45.506994   24649 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:29:45.507001   24649 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1029 08:29:45.507100   24649 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-546837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:29:45.507174   24649 ssh_runner.go:195] Run: crio config
	I1029 08:29:45.583565   24649 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1029 08:29:45.583584   24649 cni.go:84] Creating CNI manager for ""
	I1029 08:29:45.583592   24649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:29:45.583601   24649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:29:45.583621   24649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546837 NodeName:functional-546837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:29:45.583741   24649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546837"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:29:45.583808   24649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:29:45.592128   24649 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:29:45.592199   24649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 08:29:45.600306   24649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 08:29:45.613934   24649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:29:45.626684   24649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1029 08:29:45.640499   24649 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1029 08:29:45.644059   24649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:29:45.782341   24649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:29:45.796227   24649 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837 for IP: 192.168.49.2
	I1029 08:29:45.796237   24649 certs.go:195] generating shared ca certs ...
	I1029 08:29:45.796251   24649 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:29:45.796440   24649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:29:45.796495   24649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:29:45.796501   24649 certs.go:257] generating profile certs ...
	I1029 08:29:45.796586   24649 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.key
	I1029 08:29:45.796641   24649 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/apiserver.key.aec354bb
	I1029 08:29:45.796675   24649 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/proxy-client.key
	I1029 08:29:45.796793   24649 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:29:45.796817   24649 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:29:45.796824   24649 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:29:45.796848   24649 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:29:45.796867   24649 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:29:45.796888   24649 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:29:45.796927   24649 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:29:45.797503   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:29:45.816813   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:29:45.834924   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:29:45.853984   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:29:45.871984   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 08:29:45.891153   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 08:29:45.908795   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:29:45.926229   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:29:45.943044   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:29:45.960181   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:29:45.977006   24649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:29:45.994363   24649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:29:46.009477   24649 ssh_runner.go:195] Run: openssl version
	I1029 08:29:46.016325   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:29:46.025256   24649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:29:46.029298   24649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:29:46.029354   24649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:29:46.071078   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:29:46.079109   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:29:46.087381   24649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:29:46.091128   24649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:29:46.091180   24649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:29:46.136836   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:29:46.144929   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:29:46.153087   24649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:29:46.157009   24649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:29:46.157063   24649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:29:46.197936   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:29:46.205682   24649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:29:46.209445   24649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:29:46.259139   24649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:29:46.305792   24649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:29:46.348061   24649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:29:46.388883   24649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:29:46.430112   24649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
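	(Annotation: the "-checkend 86400" calls above verify that each control-plane certificate remains valid for at least 24 hours. The same check can be done without shelling out to openssl by parsing the PEM and comparing NotAfter; a minimal sketch follows, with the file path taken from the log as an example.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window (the log uses 86400s, i.e. 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}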
	I1029 08:29:46.470880   24649 kubeadm.go:401] StartCluster: {Name:functional-546837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:29:46.470960   24649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:29:46.471033   24649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:29:46.504228   24649 cri.go:89] found id: "1592a06d74b4089ca2987a6aef594183ab549bbcbb0620c79db0dda9eabfc52c"
	I1029 08:29:46.504239   24649 cri.go:89] found id: "86adea5055f603c42e90016eb4a3b7404343332090d8313ec5ff570124f65b0d"
	I1029 08:29:46.504242   24649 cri.go:89] found id: "b7103e9b47a876c36ed03a6e5ce905a116efbf95d94580bfbb86490c3899b106"
	I1029 08:29:46.504245   24649 cri.go:89] found id: "84e52c04af4fcf3f10bc0900f8f8a2c1c173bd87047bbb79850c48b145b76458"
	I1029 08:29:46.504248   24649 cri.go:89] found id: "cf30fe01da3ebfd7b4b4f70024b45ae0a9922f3835298a75350bde40f2d6b6e5"
	I1029 08:29:46.504251   24649 cri.go:89] found id: "ea69cd64a02e4f25e8e69dcd31a00ec6451b6c3777c416ba901c0ad196562582"
	I1029 08:29:46.504253   24649 cri.go:89] found id: "129cff3d569c0156c527868f827a9f17d9b6a013a431aedd88f3aef4d40da858"
	I1029 08:29:46.504255   24649 cri.go:89] found id: "a1cc7cfc9df2058dd4c955605fa83c4070b3a747b7bae54c9653f911f9c51ff9"
	I1029 08:29:46.504258   24649 cri.go:89] found id: ""
	I1029 08:29:46.504306   24649 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 08:29:46.515339   24649 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:29:46Z" level=error msg="open /run/runc: no such file or directory"
	I1029 08:29:46.515407   24649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:29:46.523507   24649 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 08:29:46.523516   24649 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 08:29:46.523570   24649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 08:29:46.531237   24649 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:29:46.531778   24649 kubeconfig.go:125] found "functional-546837" server: "https://192.168.49.2:8441"
	I1029 08:29:46.533120   24649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 08:29:46.541415   24649 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-29 08:27:43.870465760 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-29 08:29:45.635751753 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
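	(Annotation: kubeadm.go:645 decides to reconfigure the cluster because "diff -u" between the deployed kubeadm.yaml and the freshly rendered one exits non-zero; here the only drift is the enable-admission-plugins value set by the test's extra apiserver option. A minimal sketch of that drift check follows, assuming both files are readable locally rather than over SSH.)

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs "diff -u old new" and returns the diff when the files
// differ; diff exits 1 on differences, which os/exec reports as ExitError.
func configDrift(oldPath, newPath string) (string, bool, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return "", false, nil // identical: no reconfiguration needed
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return string(out), true, nil // drift detected
	}
	return "", false, err // diff itself failed
}

func main() {
	diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("will reconfigure cluster:\n" + diff)
	}
}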
	I1029 08:29:46.541425   24649 kubeadm.go:1161] stopping kube-system containers ...
	I1029 08:29:46.541436   24649 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1029 08:29:46.541492   24649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:29:46.569226   24649 cri.go:89] found id: "1592a06d74b4089ca2987a6aef594183ab549bbcbb0620c79db0dda9eabfc52c"
	I1029 08:29:46.569237   24649 cri.go:89] found id: "86adea5055f603c42e90016eb4a3b7404343332090d8313ec5ff570124f65b0d"
	I1029 08:29:46.569241   24649 cri.go:89] found id: "b7103e9b47a876c36ed03a6e5ce905a116efbf95d94580bfbb86490c3899b106"
	I1029 08:29:46.569243   24649 cri.go:89] found id: "84e52c04af4fcf3f10bc0900f8f8a2c1c173bd87047bbb79850c48b145b76458"
	I1029 08:29:46.569246   24649 cri.go:89] found id: "cf30fe01da3ebfd7b4b4f70024b45ae0a9922f3835298a75350bde40f2d6b6e5"
	I1029 08:29:46.569248   24649 cri.go:89] found id: "ea69cd64a02e4f25e8e69dcd31a00ec6451b6c3777c416ba901c0ad196562582"
	I1029 08:29:46.569251   24649 cri.go:89] found id: "129cff3d569c0156c527868f827a9f17d9b6a013a431aedd88f3aef4d40da858"
	I1029 08:29:46.569253   24649 cri.go:89] found id: "a1cc7cfc9df2058dd4c955605fa83c4070b3a747b7bae54c9653f911f9c51ff9"
	I1029 08:29:46.569255   24649 cri.go:89] found id: ""
	I1029 08:29:46.569259   24649 cri.go:252] Stopping containers: [1592a06d74b4089ca2987a6aef594183ab549bbcbb0620c79db0dda9eabfc52c 86adea5055f603c42e90016eb4a3b7404343332090d8313ec5ff570124f65b0d b7103e9b47a876c36ed03a6e5ce905a116efbf95d94580bfbb86490c3899b106 84e52c04af4fcf3f10bc0900f8f8a2c1c173bd87047bbb79850c48b145b76458 cf30fe01da3ebfd7b4b4f70024b45ae0a9922f3835298a75350bde40f2d6b6e5 ea69cd64a02e4f25e8e69dcd31a00ec6451b6c3777c416ba901c0ad196562582 129cff3d569c0156c527868f827a9f17d9b6a013a431aedd88f3aef4d40da858 a1cc7cfc9df2058dd4c955605fa83c4070b3a747b7bae54c9653f911f9c51ff9]
	I1029 08:29:46.569318   24649 ssh_runner.go:195] Run: which crictl
	I1029 08:29:46.573032   24649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 1592a06d74b4089ca2987a6aef594183ab549bbcbb0620c79db0dda9eabfc52c 86adea5055f603c42e90016eb4a3b7404343332090d8313ec5ff570124f65b0d b7103e9b47a876c36ed03a6e5ce905a116efbf95d94580bfbb86490c3899b106 84e52c04af4fcf3f10bc0900f8f8a2c1c173bd87047bbb79850c48b145b76458 cf30fe01da3ebfd7b4b4f70024b45ae0a9922f3835298a75350bde40f2d6b6e5 ea69cd64a02e4f25e8e69dcd31a00ec6451b6c3777c416ba901c0ad196562582 129cff3d569c0156c527868f827a9f17d9b6a013a431aedd88f3aef4d40da858 a1cc7cfc9df2058dd4c955605fa83c4070b3a747b7bae54c9653f911f9c51ff9
	I1029 08:29:46.636299   24649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
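	(Annotation: before reconfiguring, the restart path lists the kube-system container IDs with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" and stops them with "crictl stop --timeout=10", then stops the kubelet. A minimal sketch of that pairing, assuming crictl is on PATH and pointed at the CRI-O socket; this is illustrative, not minikube's implementation.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists every kube-system container known to the
// CRI runtime and asks crictl to stop them with a 10s grace period.
func stopKubeSystemContainers() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing running to stop
	}
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	return exec.Command("sudo", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("stop failed:", err)
	}
}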
	I1029 08:29:46.751406   24649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 08:29:46.759435   24649 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Oct 29 08:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 29 08:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 29 08:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 29 08:27 /etc/kubernetes/scheduler.conf
	
	I1029 08:29:46.759489   24649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1029 08:29:46.767368   24649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1029 08:29:46.774821   24649 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:29:46.774876   24649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 08:29:46.782389   24649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1029 08:29:46.789858   24649 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:29:46.789909   24649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 08:29:46.797524   24649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1029 08:29:46.804795   24649 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:29:46.804853   24649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
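	(Annotation: the kubeadm.go:164 lines above grep each /etc/kubernetes/*.conf for the expected control-plane endpoint and delete the file when the endpoint is missing, so the following kubeconfig phase regenerates it. A minimal local sketch of that check, with the endpoint and paths copied from the log and error handling trimmed.)

package main

import (
	"bytes"
	"fmt"
	"os"
)

// ensureEndpoint removes a kubeconfig-style file that does not reference
// the expected API server endpoint, so it can be regenerated later.
func ensureEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // file already points at the right endpoint
	}
	fmt.Printf("%q not found in %s - removing\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(f, "https://control-plane.minikube.internal:8441"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}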
	I1029 08:29:46.811862   24649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 08:29:46.819325   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 08:29:46.865592   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 08:29:50.354284   24649 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.488667329s)
	I1029 08:29:50.354353   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1029 08:29:50.560649   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 08:29:50.624717   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1029 08:29:50.693932   24649 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:29:50.694006   24649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:29:51.194144   24649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:29:51.695088   24649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:29:51.709544   24649 api_server.go:72] duration metric: took 1.01562223s to wait for apiserver process to appear ...
	I1029 08:29:51.709558   24649 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:29:51.709575   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:55.084718   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 08:29:55.084734   24649 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 08:29:55.084746   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:55.266563   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 08:29:55.266580   24649 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 08:29:55.266597   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:55.278978   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 08:29:55.278997   24649 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 08:29:55.710604   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:55.719577   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 08:29:55.719593   24649 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 08:29:56.209801   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:56.218567   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 08:29:56.218583   24649 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 08:29:56.710520   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:56.718948   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1029 08:29:56.732419   24649 api_server.go:141] control plane version: v1.34.1
	I1029 08:29:56.732439   24649 api_server.go:131] duration metric: took 5.02287535s to wait for apiserver health ...
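	(Annotation: the healthz loop above polls https://192.168.49.2:8441/healthz roughly every 500ms; 403 and 500 responses, where post-start hooks such as rbac/bootstrap-roles are still failing, are logged and retried until the endpoint returns 200 "ok". A minimal polling sketch, assuming anonymous access and skipping TLS verification the way a bootstrap health probe typically does.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 or the deadline passes, mirroring the retry loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}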
	I1029 08:29:56.732446   24649 cni.go:84] Creating CNI manager for ""
	I1029 08:29:56.732452   24649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:29:56.735992   24649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 08:29:56.738970   24649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 08:29:56.742869   24649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 08:29:56.742878   24649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 08:29:56.755850   24649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 08:29:57.232083   24649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:29:57.237850   24649 system_pods.go:59] 8 kube-system pods found
	I1029 08:29:57.237871   24649 system_pods.go:61] "coredns-66bc5c9577-ln6tn" [d6f266d5-b6a7-49c0-bece-33c03ed86302] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:29:57.237879   24649 system_pods.go:61] "etcd-functional-546837" [bfedc85c-0861-4f0b-9f6b-0c9e572f40a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 08:29:57.237884   24649 system_pods.go:61] "kindnet-6bwcp" [98fcf68b-24bc-401d-b8d7-2120d55f8c18] Running
	I1029 08:29:57.237890   24649 system_pods.go:61] "kube-apiserver-functional-546837" [48477f33-855a-4fb1-b06e-3cbf944d5667] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 08:29:57.237896   24649 system_pods.go:61] "kube-controller-manager-functional-546837" [f6509c17-585d-4b95-a169-bfddf73ddf68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:29:57.237901   24649 system_pods.go:61] "kube-proxy-vrd4c" [ebe58ac6-7b4a-4e36-b4a6-aaa399384b9b] Running
	I1029 08:29:57.237907   24649 system_pods.go:61] "kube-scheduler-functional-546837" [80fc5a51-0bc3-40ae-b255-b4cd55ee4b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 08:29:57.237912   24649 system_pods.go:61] "storage-provisioner" [c6c461e3-ebd6-471d-b186-b61ae9d0600d] Running
	I1029 08:29:57.237918   24649 system_pods.go:74] duration metric: took 5.825628ms to wait for pod list to return data ...
	I1029 08:29:57.237923   24649 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:29:57.241939   24649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:29:57.241971   24649 node_conditions.go:123] node cpu capacity is 2
	I1029 08:29:57.241981   24649 node_conditions.go:105] duration metric: took 4.054886ms to run NodePressure ...
	I1029 08:29:57.242061   24649 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 08:29:57.503250   24649 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1029 08:29:57.506785   24649 kubeadm.go:744] kubelet initialised
	I1029 08:29:57.506796   24649 kubeadm.go:745] duration metric: took 3.533592ms waiting for restarted kubelet to initialise ...
	I1029 08:29:57.506811   24649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 08:29:57.516623   24649 ops.go:34] apiserver oom_adj: -16
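	(Annotation: ops.go:34 confirms the apiserver's OOM score adjustment is -16 by reading /proc/<pid>/oom_adj for the kube-apiserver process found via pgrep. A minimal sketch of the same check; modern kernels also expose oom_score_adj, but the legacy file is what the log reads.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the newest kube-apiserver process and returns the
// contents of its legacy /proc/<pid>/oom_adj file.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.TrimSpace(string(out))
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", adj)
}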
	I1029 08:29:57.516635   24649 kubeadm.go:602] duration metric: took 10.993114064s to restartPrimaryControlPlane
	I1029 08:29:57.516643   24649 kubeadm.go:403] duration metric: took 11.045772828s to StartCluster
	I1029 08:29:57.516658   24649 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:29:57.516720   24649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:29:57.517394   24649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:29:57.517584   24649 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:29:57.517844   24649 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:29:57.517880   24649 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 08:29:57.517985   24649 addons.go:70] Setting storage-provisioner=true in profile "functional-546837"
	I1029 08:29:57.517998   24649 addons.go:239] Setting addon storage-provisioner=true in "functional-546837"
	W1029 08:29:57.518002   24649 addons.go:248] addon storage-provisioner should already be in state true
	I1029 08:29:57.518014   24649 addons.go:70] Setting default-storageclass=true in profile "functional-546837"
	I1029 08:29:57.518022   24649 host.go:66] Checking if "functional-546837" exists ...
	I1029 08:29:57.518026   24649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-546837"
	I1029 08:29:57.518341   24649 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
	I1029 08:29:57.518464   24649 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
	I1029 08:29:57.520787   24649 out.go:179] * Verifying Kubernetes components...
	I1029 08:29:57.523730   24649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:29:57.550030   24649 addons.go:239] Setting addon default-storageclass=true in "functional-546837"
	W1029 08:29:57.550043   24649 addons.go:248] addon default-storageclass should already be in state true
	I1029 08:29:57.550092   24649 host.go:66] Checking if "functional-546837" exists ...
	I1029 08:29:57.550712   24649 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
	I1029 08:29:57.562504   24649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 08:29:57.565993   24649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:29:57.566005   24649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 08:29:57.566069   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:57.569123   24649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 08:29:57.569135   24649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 08:29:57.569190   24649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:29:57.606665   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:57.609695   24649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:29:57.748874   24649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:29:57.758521   24649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:29:57.766613   24649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 08:29:57.771738   24649 node_ready.go:35] waiting up to 6m0s for node "functional-546837" to be "Ready" ...
	I1029 08:29:57.774772   24649 node_ready.go:49] node "functional-546837" is "Ready"
	I1029 08:29:57.774799   24649 node_ready.go:38] duration metric: took 3.041944ms for node "functional-546837" to be "Ready" ...
	I1029 08:29:57.774811   24649 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:29:57.774871   24649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:29:58.607305   24649 api_server.go:72] duration metric: took 1.089697318s to wait for apiserver process to appear ...
	I1029 08:29:58.607317   24649 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:29:58.607332   24649 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1029 08:29:58.619766   24649 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1029 08:29:58.620888   24649 api_server.go:141] control plane version: v1.34.1
	I1029 08:29:58.620902   24649 api_server.go:131] duration metric: took 13.579447ms to wait for apiserver health ...
	I1029 08:29:58.620909   24649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:29:58.621100   24649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 08:29:58.624124   24649 system_pods.go:59] 8 kube-system pods found
	I1029 08:29:58.624142   24649 system_pods.go:61] "coredns-66bc5c9577-ln6tn" [d6f266d5-b6a7-49c0-bece-33c03ed86302] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:29:58.624150   24649 system_pods.go:61] "etcd-functional-546837" [bfedc85c-0861-4f0b-9f6b-0c9e572f40a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 08:29:58.624155   24649 system_pods.go:61] "kindnet-6bwcp" [98fcf68b-24bc-401d-b8d7-2120d55f8c18] Running
	I1029 08:29:58.624161   24649 system_pods.go:61] "kube-apiserver-functional-546837" [48477f33-855a-4fb1-b06e-3cbf944d5667] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 08:29:58.624167   24649 system_pods.go:61] "kube-controller-manager-functional-546837" [f6509c17-585d-4b95-a169-bfddf73ddf68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:29:58.624171   24649 system_pods.go:61] "kube-proxy-vrd4c" [ebe58ac6-7b4a-4e36-b4a6-aaa399384b9b] Running
	I1029 08:29:58.624177   24649 system_pods.go:61] "kube-scheduler-functional-546837" [80fc5a51-0bc3-40ae-b255-b4cd55ee4b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 08:29:58.624180   24649 system_pods.go:61] "storage-provisioner" [c6c461e3-ebd6-471d-b186-b61ae9d0600d] Running
	I1029 08:29:58.624185   24649 system_pods.go:74] duration metric: took 3.272264ms to wait for pod list to return data ...
	I1029 08:29:58.624191   24649 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:29:58.624434   24649 addons.go:515] duration metric: took 1.106554626s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 08:29:58.626716   24649 default_sa.go:45] found service account: "default"
	I1029 08:29:58.626734   24649 default_sa.go:55] duration metric: took 2.531833ms for default service account to be created ...
	I1029 08:29:58.626742   24649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:29:58.629892   24649 system_pods.go:86] 8 kube-system pods found
	I1029 08:29:58.629907   24649 system_pods.go:89] "coredns-66bc5c9577-ln6tn" [d6f266d5-b6a7-49c0-bece-33c03ed86302] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:29:58.629914   24649 system_pods.go:89] "etcd-functional-546837" [bfedc85c-0861-4f0b-9f6b-0c9e572f40a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 08:29:58.629919   24649 system_pods.go:89] "kindnet-6bwcp" [98fcf68b-24bc-401d-b8d7-2120d55f8c18] Running
	I1029 08:29:58.629924   24649 system_pods.go:89] "kube-apiserver-functional-546837" [48477f33-855a-4fb1-b06e-3cbf944d5667] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 08:29:58.629930   24649 system_pods.go:89] "kube-controller-manager-functional-546837" [f6509c17-585d-4b95-a169-bfddf73ddf68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:29:58.629933   24649 system_pods.go:89] "kube-proxy-vrd4c" [ebe58ac6-7b4a-4e36-b4a6-aaa399384b9b] Running
	I1029 08:29:58.629938   24649 system_pods.go:89] "kube-scheduler-functional-546837" [80fc5a51-0bc3-40ae-b255-b4cd55ee4b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 08:29:58.629941   24649 system_pods.go:89] "storage-provisioner" [c6c461e3-ebd6-471d-b186-b61ae9d0600d] Running
	I1029 08:29:58.629946   24649 system_pods.go:126] duration metric: took 3.200345ms to wait for k8s-apps to be running ...
	I1029 08:29:58.629951   24649 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:29:58.630006   24649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:29:58.647576   24649 system_svc.go:56] duration metric: took 17.614993ms WaitForService to wait for kubelet
	I1029 08:29:58.647593   24649 kubeadm.go:587] duration metric: took 1.129988469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:29:58.647613   24649 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:29:58.657835   24649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:29:58.657851   24649 node_conditions.go:123] node cpu capacity is 2
	I1029 08:29:58.657861   24649 node_conditions.go:105] duration metric: took 10.243158ms to run NodePressure ...
	I1029 08:29:58.657872   24649 start.go:242] waiting for startup goroutines ...
	I1029 08:29:58.657878   24649 start.go:247] waiting for cluster config update ...
	I1029 08:29:58.657888   24649 start.go:256] writing updated cluster config ...
	I1029 08:29:58.658166   24649 ssh_runner.go:195] Run: rm -f paused
	I1029 08:29:58.661988   24649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:29:58.724951   24649 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ln6tn" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 08:30:00.730779   24649 pod_ready.go:104] pod "coredns-66bc5c9577-ln6tn" is not "Ready", error: <nil>
	W1029 08:30:03.231212   24649 pod_ready.go:104] pod "coredns-66bc5c9577-ln6tn" is not "Ready", error: <nil>
	I1029 08:30:04.230458   24649 pod_ready.go:94] pod "coredns-66bc5c9577-ln6tn" is "Ready"
	I1029 08:30:04.230472   24649 pod_ready.go:86] duration metric: took 5.505508216s for pod "coredns-66bc5c9577-ln6tn" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:04.233121   24649 pod_ready.go:83] waiting for pod "etcd-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 08:30:06.239009   24649 pod_ready.go:104] pod "etcd-functional-546837" is not "Ready", error: <nil>
	I1029 08:30:08.239067   24649 pod_ready.go:94] pod "etcd-functional-546837" is "Ready"
	I1029 08:30:08.239080   24649 pod_ready.go:86] duration metric: took 4.005946205s for pod "etcd-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.241571   24649 pod_ready.go:83] waiting for pod "kube-apiserver-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.246338   24649 pod_ready.go:94] pod "kube-apiserver-functional-546837" is "Ready"
	I1029 08:30:08.246352   24649 pod_ready.go:86] duration metric: took 4.767853ms for pod "kube-apiserver-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.248731   24649 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.253802   24649 pod_ready.go:94] pod "kube-controller-manager-functional-546837" is "Ready"
	I1029 08:30:08.253817   24649 pod_ready.go:86] duration metric: took 5.073128ms for pod "kube-controller-manager-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.256131   24649 pod_ready.go:83] waiting for pod "kube-proxy-vrd4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.437204   24649 pod_ready.go:94] pod "kube-proxy-vrd4c" is "Ready"
	I1029 08:30:08.437218   24649 pod_ready.go:86] duration metric: took 181.074255ms for pod "kube-proxy-vrd4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:08.636094   24649 pod_ready.go:83] waiting for pod "kube-scheduler-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:09.037634   24649 pod_ready.go:94] pod "kube-scheduler-functional-546837" is "Ready"
	I1029 08:30:09.037659   24649 pod_ready.go:86] duration metric: took 401.551496ms for pod "kube-scheduler-functional-546837" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:30:09.037671   24649 pod_ready.go:40] duration metric: took 10.375662721s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
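	(Annotation: the pod_ready.go lines above wait for each control-plane pod, selected by its k8s-app or component label, to report the Ready condition before the restart is declared done. A minimal client-go sketch of one such wait, assuming the kubeconfig path from the log and the standard k8s.io/client-go module; the selector and timeout values are illustrative.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every pod matching the selector in kube-system
// has the Ready condition set to True.
func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21800-2763/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ok, err := podsReady(ctx, cs, "k8s-app=kube-dns")
		if err == nil && ok {
			fmt.Println("coredns is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pods")
		case <-time.After(2 * time.Second):
		}
	}
}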
	I1029 08:30:09.095828   24649 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 08:30:09.098939   24649 out.go:179] * Done! kubectl is now configured to use "functional-546837" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.67151058Z" level=info msg="Stopped pod sandbox (already stopped): c1058641cd2e6786fc3b1622583112709be8ef370fe1b04fd4e93c54bcb5b582" id=519d9ddc-7649-4c1d-9b40-00a9ad860f6f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.671842776Z" level=info msg="Removing pod sandbox: c1058641cd2e6786fc3b1622583112709be8ef370fe1b04fd4e93c54bcb5b582" id=675a3d68-dba6-4a02-b9c7-c244312179a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.675405455Z" level=info msg="Removed pod sandbox: c1058641cd2e6786fc3b1622583112709be8ef370fe1b04fd4e93c54bcb5b582" id=675a3d68-dba6-4a02-b9c7-c244312179a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.67601973Z" level=info msg="Stopping pod sandbox: 3e4b8f52ecbc6d0b180fdc3e40d834109d80a601b69e56c69333fd35974267ed" id=e5495df8-fc5b-4561-bcbf-e938822a353b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.676072481Z" level=info msg="Stopped pod sandbox (already stopped): 3e4b8f52ecbc6d0b180fdc3e40d834109d80a601b69e56c69333fd35974267ed" id=e5495df8-fc5b-4561-bcbf-e938822a353b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.676487443Z" level=info msg="Removing pod sandbox: 3e4b8f52ecbc6d0b180fdc3e40d834109d80a601b69e56c69333fd35974267ed" id=a0fea339-6526-471c-b5ec-fe941b5980d3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.680180011Z" level=info msg="Removed pod sandbox: 3e4b8f52ecbc6d0b180fdc3e40d834109d80a601b69e56c69333fd35974267ed" id=a0fea339-6526-471c-b5ec-fe941b5980d3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.690541258Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-q27vz/POD" id=a9206f67-9fd3-4691-a362-45fedfcd2301 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.690607606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.697102885Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-q27vz Namespace:default ID:8d0ddfd7e704aea834a73728e8a86e0df187acf03b268e3031944c1803738477 UID:e223ad9f-119f-4952-974f-39d8762fcb5e NetNS:/var/run/netns/e482b030-c737-4fdb-9cec-a69baee7703a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079af0}] Aliases:map[]}"
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.697144887Z" level=info msg="Adding pod default_hello-node-75c85bcc94-q27vz to CNI network \"kindnet\" (type=ptp)"
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.704966055Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8c812651-221c-4183-9ec3-772461424925 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.722638073Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-q27vz Namespace:default ID:8d0ddfd7e704aea834a73728e8a86e0df187acf03b268e3031944c1803738477 UID:e223ad9f-119f-4952-974f-39d8762fcb5e NetNS:/var/run/netns/e482b030-c737-4fdb-9cec-a69baee7703a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079af0}] Aliases:map[]}"
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.72280308Z" level=info msg="Checking pod default_hello-node-75c85bcc94-q27vz for CNI network kindnet (type=ptp)"
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.726517769Z" level=info msg="Ran pod sandbox 8d0ddfd7e704aea834a73728e8a86e0df187acf03b268e3031944c1803738477 with infra container: default/hello-node-75c85bcc94-q27vz/POD" id=a9206f67-9fd3-4691-a362-45fedfcd2301 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:30:50 functional-546837 crio[3714]: time="2025-10-29T08:30:50.727751267Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=65d998e6-e2db-463b-a325-1587d249e2a3 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:31:01 functional-546837 crio[3714]: time="2025-10-29T08:31:01.704391334Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b413fd7a-e654-46ff-ba51-5ee46d0df2c0 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:31:15 functional-546837 crio[3714]: time="2025-10-29T08:31:15.704764212Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a3c95092-e183-4a51-8e89-204745325541 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:31:30 functional-546837 crio[3714]: time="2025-10-29T08:31:30.70504142Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1c5232d9-e199-408f-b08f-a06610d96606 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:32:05 functional-546837 crio[3714]: time="2025-10-29T08:32:05.703736913Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6c1210b1-9076-47db-9d57-80a2a1498193 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:32:13 functional-546837 crio[3714]: time="2025-10-29T08:32:13.70424998Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ec6235ae-3505-4bf3-8df5-3a0f0e83a697 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:33:31 functional-546837 crio[3714]: time="2025-10-29T08:33:31.703984885Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0f432e5f-1d25-477c-880c-4d15576f5245 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:33:35 functional-546837 crio[3714]: time="2025-10-29T08:33:35.704431258Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=489d9a7c-c1b0-4bfa-aef6-2ce363420f47 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:36:19 functional-546837 crio[3714]: time="2025-10-29T08:36:19.703794158Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=efc0da7e-0ffe-4b7f-bff1-9f92d37fbd38 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:36:21 functional-546837 crio[3714]: time="2025-10-29T08:36:21.704185471Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=db8d191f-0ffb-478d-b322-251103500073 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	42cae4bace5bb       docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424   9 minutes ago       Running             myfrontend                0                   bac2955a63b20       sp-pod                                      default
	f92f15720a876       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   ec1bb7f64f7ad       nginx-svc                                   default
	735edaef75aca       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   a9599a24fc949       kube-proxy-vrd4c                            kube-system
	3a696b9ccf1f3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   663c043365b6d       kindnet-6bwcp                               kube-system
	e6ca0f36129ab       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   fe943f82686c1       coredns-66bc5c9577-ln6tn                    kube-system
	ff4750ab5dd7e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   729116f1b6cb6       storage-provisioner                         kube-system
	95c40998456d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   3ce7c31b78402       kube-apiserver-functional-546837            kube-system
	e929c2bf2e22e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   9fd97a01df21e       kube-scheduler-functional-546837            kube-system
	b582e6e0dd43f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   a377be0961613       kube-controller-manager-functional-546837   kube-system
	22625acb0249e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   a26867463ce91       etcd-functional-546837                      kube-system
	1592a06d74b40       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   fe943f82686c1       coredns-66bc5c9577-ln6tn                    kube-system
	86adea5055f60       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   a9599a24fc949       kube-proxy-vrd4c                            kube-system
	b7103e9b47a87       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   a377be0961613       kube-controller-manager-functional-546837   kube-system
	84e52c04af4fc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   9fd97a01df21e       kube-scheduler-functional-546837            kube-system
	ea69cd64a02e4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   729116f1b6cb6       storage-provisioner                         kube-system
	129cff3d569c0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   a26867463ce91       etcd-functional-546837                      kube-system
	a1cc7cfc9df20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   663c043365b6d       kindnet-6bwcp                               kube-system
	
	
	==> coredns [1592a06d74b4089ca2987a6aef594183ab549bbcbb0620c79db0dda9eabfc52c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47512 - 2381 "HINFO IN 5054076620661269911.2639949674882553547. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022662793s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6ca0f36129abdc7e90b2fee5212bcf5ab68bc446db051764a59c0f7f8cd217d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54307 - 58423 "HINFO IN 2989960771587352535.2033220502314627571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033896139s
	
	
	==> describe nodes <==
	Name:               functional-546837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-546837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=functional-546837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_27_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:27:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546837
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:40:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:40:07 +0000   Wed, 29 Oct 2025 08:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:40:07 +0000   Wed, 29 Oct 2025 08:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:40:07 +0000   Wed, 29 Oct 2025 08:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:40:07 +0000   Wed, 29 Oct 2025 08:28:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-546837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                08212f40-8599-49a7-85a0-d66cc4693d32
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-q27vz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-r8mwc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-ln6tn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-546837                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-6bwcp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-546837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-546837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vrd4c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-546837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-546837 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-546837 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-546837 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-546837 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-546837 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-546837 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-546837 event: Registered Node functional-546837 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-546837 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-546837 event: Registered Node functional-546837 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-546837 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-546837 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-546837 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-546837 event: Registered Node functional-546837 in Controller
	
	
	==> dmesg <==
	[Oct29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014848] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520802] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035216] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815569] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.730396] kauditd_printk_skb: 36 callbacks suppressed
	[Oct29 08:19] kauditd_printk_skb: 8 callbacks suppressed
	[Oct29 08:21] overlayfs: idmapped layers are currently not supported
	[  +0.080642] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct29 08:26] overlayfs: idmapped layers are currently not supported
	[Oct29 08:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [129cff3d569c0156c527868f827a9f17d9b6a013a431aedd88f3aef4d40da858] <==
	{"level":"warn","ts":"2025-10-29T08:29:13.032725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:13.049278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:13.115680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:13.126091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:13.141721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:13.158770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:13.243160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:29:38.282712Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-29T08:29:38.282782Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-546837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-29T08:29:38.282915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T08:29:38.285712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T08:29:38.432098Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T08:29:38.432174Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-29T08:29:38.432221Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-29T08:29:38.432232Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-29T08:29:38.432161Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T08:29:38.432364Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T08:29:38.432399Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-29T08:29:38.432494Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T08:29:38.432515Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T08:29:38.432524Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T08:29:38.436200Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-29T08:29:38.436280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T08:29:38.436424Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-29T08:29:38.436453Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-546837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [22625acb0249ebd74265538a0282db67aa585b1dd9d99fde4ae88f9fc1fc322a] <==
	{"level":"warn","ts":"2025-10-29T08:29:54.042313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.058453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.076720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.098866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.116840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.134612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.152791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.182660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.188219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.209677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.223508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.240620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.256893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.272993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.291286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.309136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.326516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.362066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.383670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.397607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.411635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:29:54.479151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:39:52.962587Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1164}
	{"level":"info","ts":"2025-10-29T08:39:52.987007Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1164,"took":"24.115539ms","hash":3609462026,"current-db-size-bytes":3387392,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-29T08:39:52.987073Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3609462026,"revision":1164,"compact-revision":-1}
	
	
	==> kernel <==
	 08:40:36 up 23 min,  0 user,  load average: 0.12, 0.34, 0.57
	Linux functional-546837 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a696b9ccf1f37171b721a93e866c7494c9c2eb0925a529f0c8af970a2446b5f] <==
	I1029 08:38:36.354415       1 main.go:301] handling current node
	I1029 08:38:46.354361       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:38:46.354412       1 main.go:301] handling current node
	I1029 08:38:56.354096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:38:56.354220       1 main.go:301] handling current node
	I1029 08:39:06.354317       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:39:06.354484       1 main.go:301] handling current node
	I1029 08:39:16.361386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:39:16.361421       1 main.go:301] handling current node
	I1029 08:39:26.358484       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:39:26.358519       1 main.go:301] handling current node
	I1029 08:39:36.355280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:39:36.355315       1 main.go:301] handling current node
	I1029 08:39:46.360654       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:39:46.360691       1 main.go:301] handling current node
	I1029 08:39:56.355784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:39:56.355891       1 main.go:301] handling current node
	I1029 08:40:06.354375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:40:06.354430       1 main.go:301] handling current node
	I1029 08:40:16.354381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:40:16.354436       1 main.go:301] handling current node
	I1029 08:40:26.354290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:40:26.354325       1 main.go:301] handling current node
	I1029 08:40:36.356601       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:40:36.356634       1 main.go:301] handling current node
	
	
	==> kindnet [a1cc7cfc9df2058dd4c955605fa83c4070b3a747b7bae54c9653f911f9c51ff9] <==
	I1029 08:29:09.353033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 08:29:09.353245       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1029 08:29:09.353360       1 main.go:148] setting mtu 1500 for CNI 
	I1029 08:29:09.353371       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 08:29:09.353383       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T08:29:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1029 08:29:09.650072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1029 08:29:09.650499       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 08:29:09.650511       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 08:29:09.650520       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 08:29:09.650934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 08:29:09.651090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 08:29:09.651201       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 08:29:09.651638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 08:29:14.252621       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 08:29:14.252801       1 metrics.go:72] Registering metrics
	I1029 08:29:14.253056       1 controller.go:711] "Syncing nftables rules"
	I1029 08:29:19.649989       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:29:19.650079       1 main.go:301] handling current node
	I1029 08:29:29.650338       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:29:29.650373       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95c40998456d9fb764933180ba7efb8c73d7338edd8dbadc3494ca395746294d] <==
	I1029 08:29:55.244146       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 08:29:55.251560       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 08:29:55.256266       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 08:29:55.256399       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 08:29:55.258601       1 aggregator.go:171] initial CRD sync complete...
	I1029 08:29:55.258630       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 08:29:55.258637       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 08:29:55.258644       1 cache.go:39] Caches are synced for autoregister controller
	E1029 08:29:55.280636       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 08:29:55.769631       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 08:29:56.008729       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 08:29:57.224916       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 08:29:57.372640       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 08:29:57.452137       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 08:29:57.458934       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 08:29:58.644537       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 08:29:58.888806       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 08:29:58.937882       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 08:30:12.439754       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.43.48"}
	I1029 08:30:24.953177       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.164.90"}
	I1029 08:30:34.627638       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.111.163"}
	E1029 08:30:42.320524       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42882: use of closed network connection
	E1029 08:30:50.258924       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51374: use of closed network connection
	I1029 08:30:50.469963       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.79.226"}
	I1029 08:39:55.168331       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [b582e6e0dd43f2c27280e38492c83aa103a8d74b506252f721bfd665e1eb8a98] <==
	I1029 08:29:58.583214       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 08:29:58.583294       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 08:29:58.583313       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 08:29:58.583339       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 08:29:58.583364       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 08:29:58.583369       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 08:29:58.583373       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 08:29:58.586558       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 08:29:58.586612       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:29:58.588717       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 08:29:58.589687       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 08:29:58.592544       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 08:29:58.593161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:29:58.605916       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:29:58.608238       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 08:29:58.608414       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 08:29:58.608500       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-546837"
	I1029 08:29:58.608548       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 08:29:58.611129       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:29:58.615820       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:29:58.619790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:29:58.638252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:29:58.638274       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 08:29:58.638280       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 08:29:58.639091       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-controller-manager [b7103e9b47a876c36ed03a6e5ce905a116efbf95d94580bfbb86490c3899b106] <==
	I1029 08:29:17.574109       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 08:29:17.576243       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:29:17.577314       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 08:29:17.579218       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 08:29:17.579322       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:29:17.580559       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:29:17.581650       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:29:17.583763       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 08:29:17.583829       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 08:29:17.588541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 08:29:17.588665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 08:29:17.588731       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 08:29:17.588869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 08:29:17.591249       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:29:17.591498       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 08:29:17.602723       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:29:17.613025       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 08:29:17.613128       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 08:29:17.613157       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 08:29:17.613163       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 08:29:17.613169       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 08:29:17.617196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:29:17.617665       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 08:29:17.617718       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 08:29:17.617729       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-proxy [735edaef75aca44a33bc451b3b7e3b1919e1b15896bc75a6325571885fc780d4] <==
	I1029 08:29:56.232007       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:29:56.362295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:29:56.464046       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:29:56.464083       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:29:56.464173       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:29:56.487569       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:29:56.487629       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:29:56.494471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:29:56.495043       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:29:56.495071       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:29:56.496548       1 config.go:200] "Starting service config controller"
	I1029 08:29:56.496569       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:29:56.496588       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:29:56.496593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:29:56.496602       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:29:56.496606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:29:56.498406       1 config.go:309] "Starting node config controller"
	I1029 08:29:56.498424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:29:56.498431       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:29:56.597372       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:29:56.597384       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:29:56.597401       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [86adea5055f603c42e90016eb4a3b7404343332090d8313ec5ff570124f65b0d] <==
	I1029 08:29:12.052125       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:29:12.557318       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:29:14.293949       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:29:14.302088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:29:14.322213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:29:14.792580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:29:14.792650       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:29:15.012576       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:29:15.052621       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:29:15.060391       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:29:15.061783       1 config.go:200] "Starting service config controller"
	I1029 08:29:15.061822       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:29:15.061843       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:29:15.061848       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:29:15.061862       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:29:15.061866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:29:15.062636       1 config.go:309] "Starting node config controller"
	I1029 08:29:15.062662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:29:15.062669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:29:15.164400       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:29:15.164440       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:29:15.164500       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
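	
	Note: both kube-proxy restarts above log "Kube-proxy configuration may be incomplete or incorrect ... nodePortAddresses is unset". This is only a warning, but following the hint in the message itself, a minimal sketch of how that field could be set in a KubeProxyConfiguration is shown below (illustrative only; this is not what the test harness actually configures):
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  # "primary" restricts NodePort listeners to the node's primary IP(s)
	  nodePortAddresses: ["primary"]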
	
	
	==> kube-scheduler [84e52c04af4fcf3f10bc0900f8f8a2c1c173bd87047bbb79850c48b145b76458] <==
	I1029 08:29:14.188640       1 serving.go:386] Generated self-signed cert in-memory
	I1029 08:29:15.341903       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 08:29:15.342001       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:29:15.348291       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 08:29:15.348482       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 08:29:15.348798       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 08:29:15.349030       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 08:29:15.354124       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:29:15.354200       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:29:15.354672       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 08:29:15.355427       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 08:29:15.449996       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 08:29:15.455185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:29:15.455950       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 08:29:38.284180       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1029 08:29:38.284203       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1029 08:29:38.284226       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1029 08:29:38.284277       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 08:29:38.284399       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1029 08:29:38.284439       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e929c2bf2e22e3d43031a0c5164218c679959943006abcb3c582df56efee46e0] <==
	I1029 08:29:53.176508       1 serving.go:386] Generated self-signed cert in-memory
	W1029 08:29:55.124229       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 08:29:55.124279       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 08:29:55.124290       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 08:29:55.124297       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 08:29:55.214380       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 08:29:55.214416       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:29:55.222233       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 08:29:55.228368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:29:55.228408       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:29:55.228440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 08:29:55.329022       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:37:51 functional-546837 kubelet[4037]: E1029 08:37:51.703265    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:38:02 functional-546837 kubelet[4037]: E1029 08:38:02.703569    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:38:05 functional-546837 kubelet[4037]: E1029 08:38:05.703051    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:38:13 functional-546837 kubelet[4037]: E1029 08:38:13.703927    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:38:16 functional-546837 kubelet[4037]: E1029 08:38:16.704273    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:38:25 functional-546837 kubelet[4037]: E1029 08:38:25.702985    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:38:28 functional-546837 kubelet[4037]: E1029 08:38:28.703514    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:38:39 functional-546837 kubelet[4037]: E1029 08:38:39.703744    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:38:40 functional-546837 kubelet[4037]: E1029 08:38:40.704632    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:38:53 functional-546837 kubelet[4037]: E1029 08:38:53.703579    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:38:53 functional-546837 kubelet[4037]: E1029 08:38:53.704170    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:39:07 functional-546837 kubelet[4037]: E1029 08:39:07.703756    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:39:08 functional-546837 kubelet[4037]: E1029 08:39:08.703169    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:39:21 functional-546837 kubelet[4037]: E1029 08:39:21.703427    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:39:22 functional-546837 kubelet[4037]: E1029 08:39:22.703147    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:39:33 functional-546837 kubelet[4037]: E1029 08:39:33.703102    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:39:36 functional-546837 kubelet[4037]: E1029 08:39:36.704093    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:39:47 functional-546837 kubelet[4037]: E1029 08:39:47.703501    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:39:51 functional-546837 kubelet[4037]: E1029 08:39:51.703842    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:40:02 functional-546837 kubelet[4037]: E1029 08:40:02.703727    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:40:06 functional-546837 kubelet[4037]: E1029 08:40:06.704412    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:40:14 functional-546837 kubelet[4037]: E1029 08:40:14.704990    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:40:21 functional-546837 kubelet[4037]: E1029 08:40:21.703180    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	Oct 29 08:40:25 functional-546837 kubelet[4037]: E1029 08:40:25.703674    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-q27vz" podUID="e223ad9f-119f-4952-974f-39d8762fcb5e"
	Oct 29 08:40:34 functional-546837 kubelet[4037]: E1029 08:40:34.703598    4037 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r8mwc" podUID="2c3280d6-0927-4a4b-bf3e-263965e53c99"
	
	
	==> storage-provisioner [ea69cd64a02e4f25e8e69dcd31a00ec6451b6c3777c416ba901c0ad196562582] <==
	I1029 08:29:09.853412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 08:29:14.217100       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 08:29:14.217167       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 08:29:14.268639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:17.745068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:22.009225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:25.607909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:28.661860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:31.684251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:31.696589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 08:29:31.696774       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 08:29:31.696935       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01bb1abb-095c-4531-8862-a524b969b033", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-546837_433b1615-5b50-4a17-a0a8-5f27c28e05be became leader
	I1029 08:29:31.699616       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-546837_433b1615-5b50-4a17-a0a8-5f27c28e05be!
	W1029 08:29:31.706272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:31.709992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 08:29:31.800457       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-546837_433b1615-5b50-4a17-a0a8-5f27c28e05be!
	W1029 08:29:33.713649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:33.721758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:35.725452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:35.734394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:37.738101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:29:37.745055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ff4750ab5dd7e23dbc745cf59deed29310541a32e5c9be32db6ceee60486f5bd] <==
	W1029 08:40:12.265689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:14.269285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:14.273835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:16.277964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:16.285754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:18.288227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:18.292740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:20.296931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:20.301221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:22.304194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:22.308586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:24.311050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:24.318790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:26.322247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:26.326445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:28.329861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:28.336895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:30.339902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:30.344411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:32.347411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:32.351535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:34.354574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:34.361034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:36.364621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:40:36.371547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-546837 -n functional-546837
helpers_test.go:269: (dbg) Run:  kubectl --context functional-546837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-q27vz hello-node-connect-7d85dfc575-r8mwc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-546837 describe pod hello-node-75c85bcc94-q27vz hello-node-connect-7d85dfc575-r8mwc
helpers_test.go:290: (dbg) kubectl --context functional-546837 describe pod hello-node-75c85bcc94-q27vz hello-node-connect-7d85dfc575-r8mwc:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-q27vz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-546837/192.168.49.2
	Start Time:       Wed, 29 Oct 2025 08:30:50 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v5v27 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v5v27:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-q27vz to functional-546837
	  Warning  Failed     7m2s (x5 over 9m47s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m2s (x5 over 9m47s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m46s (x19 over 9m47s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m31s (x20 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Normal   Pulling    4m18s (x6 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-r8mwc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-546837/192.168.49.2
	Start Time:       Wed, 29 Oct 2025 08:30:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s5tww (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s5tww:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r8mwc to functional-546837
	  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m50s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.56s)
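The repeated pull failures above trace back to CRI-O's short-name policy: with short-name mode set to enforcing, the unqualified reference "kicbase/echo-server" cannot be resolved to a single registry, so every pull is rejected as ambiguous and the echo-server container never starts. A minimal workaround sketch, assuming the image is published on docker.io (an assumption, not something this run verifies), is to point the deployment at a fully-qualified reference:

	# assumption: kicbase/echo-server is available on docker.io
	kubectl --context functional-546837 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

The hello-node deployment used by the ServiceCmd tests fails the same way.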

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 image ls --format short --alsologtostderr: (2.252707562s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-546837 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-546837 image ls --format short --alsologtostderr:
I1029 08:40:59.293860   33279 out.go:360] Setting OutFile to fd 1 ...
I1029 08:40:59.293958   33279 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:40:59.294034   33279 out.go:374] Setting ErrFile to fd 2...
I1029 08:40:59.294042   33279 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:40:59.294314   33279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
I1029 08:40:59.294902   33279 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:40:59.295006   33279 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:40:59.295514   33279 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
I1029 08:40:59.314686   33279 ssh_runner.go:195] Run: systemctl --version
I1029 08:40:59.314748   33279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
I1029 08:40:59.334477   33279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
I1029 08:40:59.439306   33279 ssh_runner.go:195] Run: sudo crictl images --output json
I1029 08:41:01.465016   33279 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.025671469s)
W1029 08:41:01.465119   33279 cache_images.go:736] Failed to list images for profile functional-546837 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1029 08:41:01.462484    7351 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-10-29T08:41:01Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
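The empty image list is a timeout rather than a missing image: the underlying "sudo crictl images --output json" call returned DeadlineExceeded after roughly 2 seconds, which matches crictl's default client timeout. A quick diagnostic sketch (the timeout value is illustrative) to tell a slow image service apart from a broken one:

	out/minikube-linux-arm64 -p functional-546837 ssh -- sudo crictl --timeout 30s images --output json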

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image load --daemon kicbase/echo-server:functional-546837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-546837" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image load --daemon kicbase/echo-server:functional-546837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-546837" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-546837
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image load --daemon kicbase/echo-server:functional-546837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-546837" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image save kicbase/echo-server:functional-546837 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1029 08:30:23.922760   28546 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:30:23.924464   28546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:30:23.924515   28546 out.go:374] Setting ErrFile to fd 2...
	I1029 08:30:23.924537   28546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:30:23.924936   28546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:30:23.926002   28546 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:30:23.926155   28546 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:30:23.926659   28546 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
	I1029 08:30:23.958726   28546 ssh_runner.go:195] Run: systemctl --version
	I1029 08:30:23.958780   28546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
	I1029 08:30:23.992601   28546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
	I1029 08:30:24.115798   28546 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1029 08:30:24.115859   28546 cache_images.go:255] Failed to load cached images for "functional-546837": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1029 08:30:24.115878   28546 cache_images.go:267] failed pushing to: functional-546837

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
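This is a knock-on failure: ImageSaveToFile never wrote echo-server-save.tar, so the load finds nothing at that path. A small guard sketch, using the exact path from the log above, that makes the dependency explicit:

	tar=/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	if [ -f "$tar" ]; then
	  out/minikube-linux-arm64 -p functional-546837 image load "$tar" --alsologtostderr
	else
	  echo "missing $tar: the earlier image save failed, skipping load"
	fi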

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-546837
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image save --daemon kicbase/echo-server:functional-546837 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-546837
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-546837: exit status 1 (40.611194ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-546837

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-546837

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-546837 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-546837 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-q27vz" [e223ad9f-119f-4952-974f-39d8762fcb5e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1029 08:30:57.265110    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:33:13.387663    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:33:41.106586    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:13.387733    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-546837 -n functional-546837
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-29 08:40:50.893936312 +0000 UTC m=+1233.913513447
functional_test.go:1460: (dbg) Run:  kubectl --context functional-546837 describe po hello-node-75c85bcc94-q27vz -n default
functional_test.go:1460: (dbg) kubectl --context functional-546837 describe po hello-node-75c85bcc94-q27vz -n default:
Name:             hello-node-75c85bcc94-q27vz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-546837/192.168.49.2
Start Time:       Wed, 29 Oct 2025 08:30:50 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v5v27 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-v5v27:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-q27vz to functional-546837
Warning  Failed     7m16s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m16s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     5m (x19 over 10m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Normal   Pulling    4m32s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-546837 logs hello-node-75c85bcc94-q27vz -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-546837 logs hello-node-75c85bcc94-q27vz -n default: exit status 1 (139.06313ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-q27vz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-546837 logs hello-node-75c85bcc94-q27vz -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)
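Same root cause as the ServiceCmdConnect note above: the deployment was created with the unqualified image "kicbase/echo-server", which CRI-O's enforcing short-name mode refuses to resolve. A variant sketch that creates the deployment with a fully-qualified name up front (again assuming the image lives on docker.io):

	kubectl --context functional-546837 create deployment hello-node \
	  --image docker.io/kicbase/echo-server:latest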

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 service --namespace=default --https --url hello-node: exit status 115 (500.054023ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32527
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-546837 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 service hello-node --url --format={{.IP}}: exit status 115 (502.894383ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-546837 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 service hello-node --url: exit status 115 (637.825034ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32527
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-546837 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32527
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.64s)
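The HTTPS, Format and URL subtests all exit with SVC_UNREACHABLE for the same underlying reason: the hello-node service has no ready endpoints because its pod is stuck in ImagePullBackOff. A quick check that separates a URL or tunnel problem from a missing backend (commands only, output not taken from this run):

	kubectl --context functional-546837 get pods -l app=hello-node
	kubectl --context functional-546837 get endpoints hello-node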

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (529.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 stop --alsologtostderr -v 5: (26.998855019s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 start --wait true --alsologtostderr -v 5
E1029 08:48:08.446846    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:48:13.389902    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:50:24.584038    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:50:52.288534    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:53:13.387731    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:55:24.584484    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-894836 start --wait true --alsologtostderr -v 5: exit status 80 (8m19.406145082s)

                                                
                                                
-- stdout --
	* [ha-894836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-894836" primary control-plane node in "ha-894836" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-894836-m02" control-plane node in "ha-894836" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-894836-m03" control-plane node in "ha-894836" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:47:21.529499   51643 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:47:21.529606   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529621   51643 out.go:374] Setting ErrFile to fd 2...
	I1029 08:47:21.529626   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529872   51643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:47:21.530226   51643 out.go:368] Setting JSON to false
	I1029 08:47:21.531000   51643 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1793,"bootTime":1761725848,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:47:21.531062   51643 start.go:143] virtualization:  
	I1029 08:47:21.534496   51643 out.go:179] * [ha-894836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:47:21.538440   51643 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:47:21.538583   51643 notify.go:221] Checking for updates...
	I1029 08:47:21.544526   51643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:47:21.547326   51643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:21.550152   51643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:47:21.553042   51643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:47:21.555854   51643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:47:21.559195   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:21.559391   51643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:47:21.590221   51643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:47:21.590337   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.646530   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.636887182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.646636   51643 docker.go:319] overlay module found
	I1029 08:47:21.651571   51643 out.go:179] * Using the docker driver based on existing profile
	I1029 08:47:21.654406   51643 start.go:309] selected driver: docker
	I1029 08:47:21.654426   51643 start.go:930] validating driver "docker" against &{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.654576   51643 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:47:21.654673   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.713521   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.703756989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.713963   51643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:21.713998   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:21.714048   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:21.714093   51643 start.go:353] cluster config:
	{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.719068   51643 out.go:179] * Starting "ha-894836" primary control-plane node in "ha-894836" cluster
	I1029 08:47:21.721819   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:21.724835   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:21.727599   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:21.727626   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:21.727647   51643 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:47:21.727666   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:21.727743   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:21.727753   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:21.727909   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:21.745168   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:21.745191   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:21.745207   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:21.745229   51643 start.go:360] acquireMachinesLock for ha-894836: {Name:mk81ec6bdb62bf512bc2903a97ef9ba531fecfa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:21.745296   51643 start.go:364] duration metric: took 49.552µs to acquireMachinesLock for "ha-894836"
	I1029 08:47:21.745320   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:21.745329   51643 fix.go:54] fixHost starting: 
	I1029 08:47:21.745587   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:21.762859   51643 fix.go:112] recreateIfNeeded on ha-894836: state=Stopped err=<nil>
	W1029 08:47:21.762919   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:21.766255   51643 out.go:252] * Restarting existing docker container for "ha-894836" ...
	I1029 08:47:21.766345   51643 cli_runner.go:164] Run: docker start ha-894836
	I1029 08:47:22.012669   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:22.033117   51643 kic.go:430] container "ha-894836" state is running.
	I1029 08:47:22.033526   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:22.057333   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:22.057589   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:22.057651   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:22.080561   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:22.080896   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:22.080906   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:22.081644   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:47:25.232635   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.232719   51643 ubuntu.go:182] provisioning hostname "ha-894836"
	I1029 08:47:25.232811   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.251060   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.251387   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.251404   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836 && echo "ha-894836" | sudo tee /etc/hostname
	I1029 08:47:25.413694   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.413779   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.431658   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.431987   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.432010   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:25.580597   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:25.580622   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:25.580654   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:25.580671   51643 provision.go:84] configureAuth start
	I1029 08:47:25.580734   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:25.598256   51643 provision.go:143] copyHostCerts
	I1029 08:47:25.598293   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598330   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:25.598336   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598412   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:25.598503   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598519   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:25.598523   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598549   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:25.598597   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598618   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:25.598622   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598646   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:25.598700   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836 san=[127.0.0.1 192.168.49.2 ha-894836 localhost minikube]
	I1029 08:47:26.140516   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:26.140603   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:26.140697   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.157969   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.259769   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:26.259831   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:26.276774   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:26.276833   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:26.294325   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:26.294387   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1029 08:47:26.312588   51643 provision.go:87] duration metric: took 731.894787ms to configureAuth
	I1029 08:47:26.312652   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:26.312914   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:26.313019   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.330542   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:26.330847   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:26.330868   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:26.749842   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:26.749867   51643 machine.go:97] duration metric: took 4.692267534s to provisionDockerMachine
	I1029 08:47:26.749878   51643 start.go:293] postStartSetup for "ha-894836" (driver="docker")
	I1029 08:47:26.749923   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:26.750004   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:26.750092   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.771117   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.878934   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:26.882605   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:26.882634   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:26.882646   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:26.882718   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:26.882831   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:26.882843   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:26.882991   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:26.891148   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:26.909280   51643 start.go:296] duration metric: took 159.355379ms for postStartSetup
	I1029 08:47:26.909405   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:26.909466   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.925846   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.025507   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:27.030364   51643 fix.go:56] duration metric: took 5.285027579s for fixHost
	I1029 08:47:27.030393   51643 start.go:83] releasing machines lock for "ha-894836", held for 5.285083572s
	I1029 08:47:27.030473   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:27.046867   51643 ssh_runner.go:195] Run: cat /version.json
	I1029 08:47:27.046908   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:27.046925   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.046972   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.072712   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.075970   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.176083   51643 ssh_runner.go:195] Run: systemctl --version
	I1029 08:47:27.271259   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:27.306996   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:27.311297   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:27.311362   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:27.318983   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:27.319008   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:27.319038   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:27.319083   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:27.334445   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:27.347545   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:27.347636   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:27.363332   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:27.376173   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:27.492370   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:27.612596   51643 docker.go:234] disabling docker service ...
	I1029 08:47:27.612724   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:27.628742   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:27.643114   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:27.769923   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:27.894105   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:27.906720   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:27.921611   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:27.921734   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.930389   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:27.930505   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.939285   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.947870   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.956623   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:27.965519   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.974392   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.982657   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.991382   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:27.999251   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:28.008477   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.138673   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:47:28.265137   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:28.265257   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:28.269363   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:28.269468   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:28.273391   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:28.298305   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:28.298482   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.332193   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.363359   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:28.366252   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:28.382546   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:28.386569   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:28.396854   51643 kubeadm.go:884] updating cluster {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:47:28.397006   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:28.397068   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.434678   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.434703   51643 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:47:28.434770   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.460074   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.460096   51643 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:47:28.460105   51643 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:47:28.460221   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:47:28.460331   51643 ssh_runner.go:195] Run: crio config
	I1029 08:47:28.513402   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:28.513423   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:28.513438   51643 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:47:28.513462   51643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-894836 NodeName:ha-894836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:47:28.513598   51643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-894836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:47:28.513621   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:28.513670   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:28.525412   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:28.525541   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1029 08:47:28.525629   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:28.533537   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:28.533649   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1029 08:47:28.541256   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1029 08:47:28.554128   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:28.567304   51643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1029 08:47:28.580046   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:28.592794   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:28.596388   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:28.605938   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.721205   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:28.736487   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.2
	I1029 08:47:28.736507   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:28.736536   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:28.736703   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:28.736755   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:28.736768   51643 certs.go:257] generating profile certs ...
	I1029 08:47:28.736855   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:28.736885   51643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c
	I1029 08:47:28.736902   51643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1029 08:47:29.326544   51643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c ...
	I1029 08:47:29.326575   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c: {Name:mk2c66c1b3a93815ffa793a9ebfc638bd973efe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326766   51643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c ...
	I1029 08:47:29.326783   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c: {Name:mk64676774836dc306d0667653f14bbfbbb06e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326872   51643 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt
	I1029 08:47:29.327021   51643 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key
	I1029 08:47:29.327155   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:29.327173   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:29.327190   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:29.327208   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:29.327227   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:29.327243   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:29.327257   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:29.327275   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:29.327286   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:29.327336   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:29.327368   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:29.327380   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:29.327404   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:29.327429   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:29.327455   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:29.327499   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:29.327529   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.327546   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.327560   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.328197   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:29.346024   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:29.368215   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:29.401494   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:29.429372   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:29.456963   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:29.488058   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:29.518940   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:29.566867   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:29.611519   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:29.660809   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:29.699081   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:47:29.722213   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:29.732266   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:29.745012   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751640   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751710   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.814511   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:29.826133   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:29.838154   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844165   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844232   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.905999   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:29.913848   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:29.924235   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932561   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932629   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.989153   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:47:29.997241   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:30.008565   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:30.100996   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:30.148023   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:30.205555   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:30.248683   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:30.291195   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 08:47:30.333318   51643 kubeadm.go:401] StartCluster: {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:30.333452   51643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:47:30.333514   51643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:47:30.363953   51643 cri.go:89] found id: "e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe"
	I1029 08:47:30.363975   51643 cri.go:89] found id: "a917c056972ea87cbf263c90930d10cb54f7d7c4f044215f8091e6dc6ec698fe"
	I1029 08:47:30.363981   51643 cri.go:89] found id: "67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8"
	I1029 08:47:30.363984   51643 cri.go:89] found id: "ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c"
	I1029 08:47:30.363987   51643 cri.go:89] found id: "c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86"
	I1029 08:47:30.363991   51643 cri.go:89] found id: ""
	I1029 08:47:30.364037   51643 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 08:47:30.375323   51643 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:47:30Z" level=error msg="open /run/runc: no such file or directory"
	I1029 08:47:30.375401   51643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:47:30.385470   51643 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 08:47:30.385492   51643 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 08:47:30.385554   51643 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 08:47:30.394291   51643 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:30.394701   51643 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-894836" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.394803   51643 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-894836" cluster setting kubeconfig missing "ha-894836" context setting]
	I1029 08:47:30.395074   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.395601   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 08:47:30.396079   51643 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 08:47:30.396100   51643 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 08:47:30.396107   51643 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 08:47:30.396112   51643 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 08:47:30.396116   51643 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 08:47:30.396600   51643 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 08:47:30.396732   51643 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1029 08:47:30.405937   51643 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1029 08:47:30.405963   51643 kubeadm.go:602] duration metric: took 20.455594ms to restartPrimaryControlPlane
	I1029 08:47:30.405973   51643 kubeadm.go:403] duration metric: took 72.664815ms to StartCluster
	I1029 08:47:30.405988   51643 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406062   51643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.406653   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406844   51643 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:30.406872   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:30.406887   51643 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 08:47:30.407409   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.412586   51643 out.go:179] * Enabled addons: 
	I1029 08:47:30.415502   51643 addons.go:515] duration metric: took 8.615131ms for enable addons: enabled=[]
	I1029 08:47:30.415550   51643 start.go:247] waiting for cluster config update ...
	I1029 08:47:30.415564   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:30.418838   51643 out.go:203] 
	I1029 08:47:30.421986   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.422163   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.425622   51643 out.go:179] * Starting "ha-894836-m02" control-plane node in "ha-894836" cluster
	I1029 08:47:30.428500   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:30.431446   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:30.434321   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:30.434374   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:30.434516   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:30.434549   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:30.434704   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.434965   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:30.469091   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:30.469113   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:30.469126   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:30.469150   51643 start.go:360] acquireMachinesLock for ha-894836-m02: {Name:mkb930aec8192c14094c9c711c93e26847bf9202 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:30.469207   51643 start.go:364] duration metric: took 40.936µs to acquireMachinesLock for "ha-894836-m02"
	I1029 08:47:30.469228   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:30.469233   51643 fix.go:54] fixHost starting: m02
	I1029 08:47:30.469504   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.500880   51643 fix.go:112] recreateIfNeeded on ha-894836-m02: state=Stopped err=<nil>
	W1029 08:47:30.500905   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:30.506548   51643 out.go:252] * Restarting existing docker container for "ha-894836-m02" ...
	I1029 08:47:30.506637   51643 cli_runner.go:164] Run: docker start ha-894836-m02
	I1029 08:47:30.853634   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.880386   51643 kic.go:430] container "ha-894836-m02" state is running.
	I1029 08:47:30.880745   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:30.905743   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.905982   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:30.906048   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:30.933559   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:30.933904   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:30.933913   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:30.934536   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55068->127.0.0.1:32813: read: connection reset by peer
	I1029 08:47:34.203957   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.204004   51643 ubuntu.go:182] provisioning hostname "ha-894836-m02"
	I1029 08:47:34.204076   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.234369   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.234685   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.234703   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m02 && echo "ha-894836-m02" | sudo tee /etc/hostname
	I1029 08:47:34.542369   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.542516   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.574456   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.574762   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.574779   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:34.827546   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:34.827578   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:34.827603   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:34.827638   51643 provision.go:84] configureAuth start
	I1029 08:47:34.827714   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:34.862097   51643 provision.go:143] copyHostCerts
	I1029 08:47:34.862139   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862171   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:34.862183   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862258   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:34.862339   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862362   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:34.862367   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862394   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:34.862440   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862461   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:34.862469   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862496   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:34.862545   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m02 san=[127.0.0.1 192.168.49.3 ha-894836-m02 localhost minikube]
	I1029 08:47:35.182658   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:35.182745   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:35.182793   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.201881   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:35.346712   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:35.346775   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:35.384129   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:35.384198   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:47:35.415588   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:35.415653   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:35.457021   51643 provision.go:87] duration metric: took 629.369458ms to configureAuth
	I1029 08:47:35.457058   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:35.457378   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:35.457501   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.485978   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:35.486288   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:35.486309   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:35.984048   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:35.984077   51643 machine.go:97] duration metric: took 5.078076838s to provisionDockerMachine
	I1029 08:47:35.984093   51643 start.go:293] postStartSetup for "ha-894836-m02" (driver="docker")
	I1029 08:47:35.984105   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:35.984167   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:35.984212   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.009654   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.121479   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:36.125706   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:36.125737   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:36.125748   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:36.125802   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:36.125883   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:36.125902   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:36.126006   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:36.133908   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:36.152562   51643 start.go:296] duration metric: took 168.452944ms for postStartSetup
	I1029 08:47:36.152710   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:36.152752   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.170976   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.276973   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:36.287814   51643 fix.go:56] duration metric: took 5.818573756s for fixHost
	I1029 08:47:36.287841   51643 start.go:83] releasing machines lock for "ha-894836-m02", held for 5.818626179s
	I1029 08:47:36.287916   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:36.328488   51643 out.go:179] * Found network options:
	I1029 08:47:36.331520   51643 out.go:179]   - NO_PROXY=192.168.49.2
	W1029 08:47:36.337513   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:47:36.337573   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:47:36.337636   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:36.337690   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.337952   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:36.338007   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.372705   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.382161   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.725650   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:36.732748   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:36.732831   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:36.748828   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:36.748854   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:36.748899   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:36.748976   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:36.774113   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:36.799926   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:36.800009   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:36.821641   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:36.838818   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:37.085073   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:37.283501   51643 docker.go:234] disabling docker service ...
	I1029 08:47:37.283581   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:37.306704   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:37.329115   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:37.528935   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:37.724811   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:37.745385   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:37.766616   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:37.766687   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.777687   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:37.777763   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.790547   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.805597   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.824888   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:37.833592   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.847509   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.857690   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.870682   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:37.881416   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:37.893784   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:38.130979   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:47:38.346041   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:38.346156   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:38.350264   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:38.350326   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:38.353928   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:38.381039   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:38.381134   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.409799   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.443728   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:38.446621   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:47:38.449812   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:38.466711   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:38.470765   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:38.480879   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:47:38.481131   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:38.481434   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:38.498248   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:47:38.498544   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.3
	I1029 08:47:38.498558   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:38.498572   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:38.498695   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:38.498747   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:38.498755   51643 certs.go:257] generating profile certs ...
	I1029 08:47:38.498831   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:38.498903   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.d4a7ec17
	I1029 08:47:38.498943   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:38.498962   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:38.498975   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:38.498991   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:38.499002   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:38.499012   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:38.499039   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:38.499054   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:38.499064   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:38.499118   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:38.499148   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:38.499158   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:38.499189   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:38.499215   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:38.499239   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:38.499284   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:38.499315   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:38.499335   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:38.499349   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:38.499410   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:38.516805   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:38.612647   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:47:38.616561   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:47:38.624748   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:47:38.628258   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:47:38.637180   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:47:38.640891   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:47:38.650214   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:47:38.653972   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:47:38.662619   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:47:38.666317   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:47:38.674366   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:47:38.678199   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:47:38.686306   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:38.706856   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:38.724221   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:38.741317   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:38.759079   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:38.777104   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:38.794767   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:38.812149   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:38.830280   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:38.849527   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:38.870347   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:38.890190   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:47:38.904271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:47:38.917479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:47:38.930520   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:47:38.945717   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:47:38.959276   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:47:38.972479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:47:38.985067   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:38.991454   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:38.999996   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004703   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004780   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.050207   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:39.058997   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:39.067821   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071762   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071826   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.113725   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:39.121907   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:39.130312   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134430   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134513   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.176116   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:47:39.184143   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:39.188071   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:39.229804   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:39.271125   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:39.314420   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:39.358357   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:39.404199   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 08:47:39.450657   51643 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1029 08:47:39.450775   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:47:39.450808   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:39.450861   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:39.462795   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:39.462879   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1029 08:47:39.462977   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:39.471222   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:39.471296   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:47:39.480280   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:47:39.493347   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:39.506856   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:39.521570   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:39.525461   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:39.536266   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.680061   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.694883   51643 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:39.695320   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:39.699488   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:47:39.702679   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.837549   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.854606   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:47:39.854679   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:47:39.854929   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m02" to be "Ready" ...
	W1029 08:47:49.857769   51643 node_ready.go:55] error getting node "ha-894836-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-894836-m02": net/http: TLS handshake timeout
	I1029 08:47:52.860254   51643 node_ready.go:49] node "ha-894836-m02" is "Ready"
	I1029 08:47:52.860290   51643 node_ready.go:38] duration metric: took 13.005340499s for node "ha-894836-m02" to be "Ready" ...
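Note: the node_ready wait above keeps retrying until the node's Ready condition reports True, tolerating transient errors such as the TLS handshake timeout logged at 08:47:49. A rough client-go equivalent of that loop is sketched below; the 3-second interval, function name, and package name are assumptions for illustration, not minikube's node_ready.go implementation.

    package example

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				// Transient API errors (e.g. TLS handshake timeouts) are swallowed so the poll retries.
    				return false, nil
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    // e.g. waitNodeReady(ctx, clientset, "ha-894836-m02", 6*time.Minute)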
	I1029 08:47:52.860304   51643 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:47:52.860384   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.361211   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.860507   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.360916   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.860446   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.361159   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.860486   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.361306   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.860828   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.360541   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.860525   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.361238   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.374939   51643 api_server.go:72] duration metric: took 18.680010468s to wait for apiserver process to appear ...
	I1029 08:47:58.374971   51643 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:47:58.374992   51643 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:47:58.386476   51643 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:47:58.388170   51643 api_server.go:141] control plane version: v1.34.1
	I1029 08:47:58.388195   51643 api_server.go:131] duration metric: took 13.217297ms to wait for apiserver health ...
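Note: the healthz probe above is an HTTPS GET against /healthz that succeeds when the apiserver answers 200 with body "ok". A self-contained sketch of that check using only the Go standard library follows; the function name and parameters are illustrative, and the api_server.go check in the log uses minikube's own client plumbing rather than this code.

    package example

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // checkHealthz GETs the given healthz URL with client-cert auth and returns the body ("ok" when healthy).
    func checkHealthz(caPath, certPath, keyPath, url string) (string, error) {
    	caPEM, err := os.ReadFile(caPath)
    	if err != nil {
    		return "", err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	cert, err := tls.LoadX509KeyPair(certPath, keyPath)
    	if err != nil {
    		return "", err
    	}
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
    	}}
    	resp, err := client.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return "", err
    	}
    	if resp.StatusCode != http.StatusOK {
    		return "", fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return string(body), nil
    }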
	I1029 08:47:58.388204   51643 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:47:58.397073   51643 system_pods.go:59] 26 kube-system pods found
	I1029 08:47:58.397155   51643 system_pods.go:61] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.397179   51643 system_pods.go:61] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.397217   51643 system_pods.go:61] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.397245   51643 system_pods.go:61] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.397271   51643 system_pods.go:61] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.397328   51643 system_pods.go:61] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.397356   51643 system_pods.go:61] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.397405   51643 system_pods.go:61] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.397432   51643 system_pods.go:61] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.397457   51643 system_pods.go:61] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.397494   51643 system_pods.go:61] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.397520   51643 system_pods.go:61] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.397554   51643 system_pods.go:61] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.397597   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.397620   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.397668   51643 system_pods.go:61] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.397697   51643 system_pods.go:61] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.397724   51643 system_pods.go:61] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.397756   51643 system_pods.go:61] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.397780   51643 system_pods.go:61] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.397802   51643 system_pods.go:61] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.397842   51643 system_pods.go:61] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.397867   51643 system_pods.go:61] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.397978   51643 system_pods.go:61] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.398003   51643 system_pods.go:61] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.398030   51643 system_pods.go:61] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.398069   51643 system_pods.go:74] duration metric: took 9.856974ms to wait for pod list to return data ...
	I1029 08:47:58.398098   51643 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:47:58.402325   51643 default_sa.go:45] found service account: "default"
	I1029 08:47:58.402401   51643 default_sa.go:55] duration metric: took 4.283713ms for default service account to be created ...
	I1029 08:47:58.402426   51643 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:47:58.411486   51643 system_pods.go:86] 26 kube-system pods found
	I1029 08:47:58.411568   51643 system_pods.go:89] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.411592   51643 system_pods.go:89] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.411631   51643 system_pods.go:89] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.411661   51643 system_pods.go:89] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.411686   51643 system_pods.go:89] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.411725   51643 system_pods.go:89] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.411755   51643 system_pods.go:89] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.411785   51643 system_pods.go:89] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.411826   51643 system_pods.go:89] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.411849   51643 system_pods.go:89] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.411887   51643 system_pods.go:89] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.411913   51643 system_pods.go:89] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.411942   51643 system_pods.go:89] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.411982   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.412004   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.412046   51643 system_pods.go:89] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.412074   51643 system_pods.go:89] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.412099   51643 system_pods.go:89] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.412131   51643 system_pods.go:89] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.412157   51643 system_pods.go:89] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.412180   51643 system_pods.go:89] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.412217   51643 system_pods.go:89] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.412244   51643 system_pods.go:89] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.412269   51643 system_pods.go:89] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.412360   51643 system_pods.go:89] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.412396   51643 system_pods.go:89] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.412419   51643 system_pods.go:126] duration metric: took 9.970092ms to wait for k8s-apps to be running ...
	I1029 08:47:58.412443   51643 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:47:58.412532   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:47:58.430648   51643 system_svc.go:56] duration metric: took 18.183914ms WaitForService to wait for kubelet
	I1029 08:47:58.430727   51643 kubeadm.go:587] duration metric: took 18.735792001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:58.430763   51643 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:47:58.435505   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435585   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435615   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435636   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435667   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435691   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435709   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435750   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435776   51643 node_conditions.go:105] duration metric: took 4.978006ms to run NodePressure ...
	I1029 08:47:58.435804   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:58.435853   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:58.439739   51643 out.go:203] 
	I1029 08:47:58.443690   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:58.443882   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.447597   51643 out.go:179] * Starting "ha-894836-m03" control-plane node in "ha-894836" cluster
	I1029 08:47:58.451296   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:58.454468   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:58.457455   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:58.457578   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:58.457532   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:58.457963   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:58.457997   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:58.458193   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.484925   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:58.484945   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:58.484957   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:58.484981   51643 start.go:360] acquireMachinesLock for ha-894836-m03: {Name:mkff6279e1eccd0127b32c0d6857db9b3fa3dac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:58.485031   51643 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-894836-m03"
	I1029 08:47:58.485050   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:58.485055   51643 fix.go:54] fixHost starting: m03
	I1029 08:47:58.485336   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.517723   51643 fix.go:112] recreateIfNeeded on ha-894836-m03: state=Stopped err=<nil>
	W1029 08:47:58.517747   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:58.521056   51643 out.go:252] * Restarting existing docker container for "ha-894836-m03" ...
	I1029 08:47:58.521146   51643 cli_runner.go:164] Run: docker start ha-894836-m03
	I1029 08:47:58.923330   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.955597   51643 kic.go:430] container "ha-894836-m03" state is running.
	I1029 08:47:58.955975   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:47:58.985436   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.985727   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:58.985800   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:47:59.021071   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:59.021382   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:47:59.021392   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:59.022242   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:48:02.369899   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.369983   51643 ubuntu.go:182] provisioning hostname "ha-894836-m03"
	I1029 08:48:02.370089   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.396111   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.396431   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.396444   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m03 && echo "ha-894836-m03" | sudo tee /etc/hostname
	I1029 08:48:02.706986   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.707060   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.732902   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.733206   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.733231   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:48:03.018167   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:48:03.018188   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:48:03.018211   51643 ubuntu.go:190] setting up certificates
	I1029 08:48:03.018221   51643 provision.go:84] configureAuth start
	I1029 08:48:03.018284   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:03.051408   51643 provision.go:143] copyHostCerts
	I1029 08:48:03.051450   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051486   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:48:03.051493   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051568   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:48:03.051644   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051661   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:48:03.051666   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051690   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:48:03.051728   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051744   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:48:03.051748   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051770   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:48:03.051815   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m03 san=[127.0.0.1 192.168.49.4 ha-894836-m03 localhost minikube]
	I1029 08:48:04.283916   51643 provision.go:177] copyRemoteCerts
	I1029 08:48:04.283985   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:48:04.284031   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.301428   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:04.461287   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:48:04.461367   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:48:04.496816   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:48:04.496881   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:48:04.527177   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:48:04.527250   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:48:04.556555   51643 provision.go:87] duration metric: took 1.5383197s to configureAuth
	I1029 08:48:04.556585   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:48:04.556817   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:48:04.556919   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.581700   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:04.581999   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:04.582018   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:48:05.181543   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:48:05.181567   51643 machine.go:97] duration metric: took 6.195829937s to provisionDockerMachine
	I1029 08:48:05.181589   51643 start.go:293] postStartSetup for "ha-894836-m03" (driver="docker")
	I1029 08:48:05.181600   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:48:05.181674   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:48:05.181722   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.207592   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.322834   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:48:05.327694   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:48:05.327775   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:48:05.327808   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:48:05.327899   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:48:05.328050   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:48:05.328079   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:48:05.328256   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:48:05.343080   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:48:05.371323   51643 start.go:296] duration metric: took 189.718932ms for postStartSetup
	I1029 08:48:05.371417   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:48:05.371455   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.397947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.541458   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:48:05.561976   51643 fix.go:56] duration metric: took 7.076913817s for fixHost
	I1029 08:48:05.562004   51643 start.go:83] releasing machines lock for "ha-894836-m03", held for 7.076964665s
	I1029 08:48:05.562072   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:05.600883   51643 out.go:179] * Found network options:
	I1029 08:48:05.604417   51643 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1029 08:48:05.607757   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607793   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607816   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607826   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:48:05.607887   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:48:05.607928   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.607983   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:48:05.608041   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.654947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.658008   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:06.130162   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:48:06.143305   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:48:06.143421   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:48:06.167460   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:48:06.167489   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:48:06.167523   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:48:06.167572   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:48:06.213970   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:48:06.251029   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:48:06.251087   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:48:06.290080   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:48:06.327709   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:48:06.726326   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:48:07.139091   51643 docker.go:234] disabling docker service ...
	I1029 08:48:07.139182   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:48:07.178202   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:48:07.209433   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:48:07.608392   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:48:08.086947   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:48:08.121769   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:48:08.184236   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:48:08.184326   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.215828   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:48:08.215914   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.238638   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.269033   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.295262   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:48:08.331399   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.356819   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.389668   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.403860   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:48:08.423244   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:48:08.437579   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:48:08.832580   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:49:39.275381   51643 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.442758035s)
	I1029 08:49:39.275412   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:49:39.275483   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:49:39.279771   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:49:39.279855   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:49:39.284759   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:49:39.334853   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:49:39.334984   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.371804   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.405984   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:49:39.412429   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:49:39.415504   51643 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1029 08:49:39.418469   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:49:39.435673   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:49:39.440794   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:39.451208   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:49:39.451471   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:39.451781   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:49:39.468915   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:49:39.469188   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.4
	I1029 08:49:39.469202   51643 certs.go:195] generating shared ca certs ...
	I1029 08:49:39.469216   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:49:39.469334   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:49:39.469401   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:49:39.469413   51643 certs.go:257] generating profile certs ...
	I1029 08:49:39.469489   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:49:39.469559   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.761eb988
	I1029 08:49:39.469601   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:49:39.469613   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:49:39.469625   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:49:39.469641   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:49:39.469654   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:49:39.469666   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:49:39.469679   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:49:39.469694   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:49:39.469705   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:49:39.469761   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:49:39.469793   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:49:39.469805   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:49:39.469829   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:49:39.469858   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:49:39.469887   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:49:39.469934   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:49:39.469964   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:49:39.469983   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:39.469994   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:49:39.470057   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:49:39.488996   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:49:39.588688   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:49:39.592443   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:49:39.600773   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:49:39.604466   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:49:39.613528   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:49:39.617112   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:49:39.625577   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:49:39.629278   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:49:39.637493   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:49:39.641121   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:49:39.650070   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:49:39.653954   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:49:39.662931   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:49:39.685107   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:49:39.705459   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:49:39.724858   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:49:39.743556   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:49:39.762456   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:49:39.781042   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:49:39.803894   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:49:39.827899   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:49:39.848693   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:49:39.875006   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:49:39.895980   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:49:39.909585   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:49:39.922536   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:49:39.935718   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:49:39.950308   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:49:39.965160   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:49:39.979271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:49:39.992671   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:49:39.999106   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:49:40.009754   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016736   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016877   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.067934   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:49:40.077186   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:49:40.086864   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091154   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091257   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.134215   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:49:40.142049   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:49:40.150815   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154732   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154796   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.196358   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:49:40.204753   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:49:40.208825   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:49:40.251130   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:49:40.293659   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:49:40.335303   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:49:40.378403   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:49:40.419111   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 08:49:40.459947   51643 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1029 08:49:40.460045   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:49:40.460074   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:49:40.460122   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:49:40.472263   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:49:40.472402   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1029 08:49:40.472491   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:49:40.482442   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:49:40.482527   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:49:40.491244   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:49:40.509334   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:49:40.522741   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:49:40.543511   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:49:40.549027   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:40.559626   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.700906   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.716131   51643 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:49:40.716494   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:40.720440   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:49:40.723093   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.849270   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.870801   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:49:40.870875   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:49:40.871137   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m03" to be "Ready" ...
	W1029 08:49:42.878542   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:45.376167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:47.875546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:49.879197   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:52.374859   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:54.874674   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:56.875642   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:59.385971   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:01.874925   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:04.375281   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:06.875417   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:08.877527   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:11.374735   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:13.374773   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:15.875423   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:18.374307   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:20.375009   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:22.875458   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:24.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:27.374436   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:29.375591   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:31.875678   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:33.876408   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:36.375279   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:38.875405   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:40.875687   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:43.375139   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:45.376751   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:47.874681   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:50.375198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:52.874746   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:54.875461   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:57.374875   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:59.375081   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:01.874956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:03.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:05.875856   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:07.875956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:10.374910   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:12.375300   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:14.874455   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:16.874501   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:18.881741   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:21.374575   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:23.375182   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:25.875630   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:28.375397   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:30.376726   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:32.874952   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:35.375371   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:37.875672   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:40.374584   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:42.375166   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:44.375299   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:46.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:48.876305   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:51.375111   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:53.375554   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:55.874828   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:58.374446   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:00.391777   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:02.875635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:05.374696   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:07.875548   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:10.374764   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:12.375076   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:14.874580   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:16.875240   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:18.880605   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:21.375072   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:23.875108   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:26.375196   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:28.375284   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:30.875177   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:32.875570   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:35.374573   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:37.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:39.375982   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:41.875595   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:44.377104   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:46.875402   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:48.877198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:51.375357   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:53.874734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:55.875011   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:57.875521   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:00.380590   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:02.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:05.375714   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:07.875383   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:10.374415   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:12.376491   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:14.875713   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:17.375204   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:19.377537   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:21.877439   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:24.375155   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:26.874635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:28.881623   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:31.374848   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:33.374930   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:35.875771   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:38.375835   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:40.875765   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:43.375167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:45.874879   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:47.878546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:50.375661   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:52.875435   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:55.375646   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:57.874489   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:59.875624   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:02.375174   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:04.874940   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:07.375497   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:09.875063   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:11.875223   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:13.875266   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:16.378660   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:18.883945   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:21.374606   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:23.376495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:25.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:28.375564   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:30.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:33.375292   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:35.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:38.375495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:40.874844   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:42.874893   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:45.376206   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:47.875511   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:50.375400   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:52.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:55.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:57.374957   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:59.375343   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:01.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:04.374336   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:06.374603   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:08.875609   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:11.375178   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:13.375447   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:15.376425   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:17.874841   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:20.375318   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:22.874543   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:25.375289   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:27.874901   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:30.374710   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:32.375028   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:34.375632   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:36.875017   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:38.877472   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	I1029 08:55:40.871415   51643 node_ready.go:38] duration metric: took 6m0.000252794s for node "ha-894836-m03" to be "Ready" ...
	I1029 08:55:40.874909   51643 out.go:203] 
	W1029 08:55:40.877827   51643 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1029 08:55:40.877849   51643 out.go:285] * 
	* 
	W1029 08:55:40.880012   51643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:55:40.882934   51643 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-894836 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node list --alsologtostderr -v 5
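Context for the failure above: the repeated node_ready.go warnings come from a readiness poll — minikube re-checks the node's Ready condition every few seconds until the 6m0s wait deadline expires, then exits with GUEST_START / "context deadline exceeded". Below is a minimal sketch of that kind of poll against the profile's kubeconfig; it is illustrative only (client-go, an assumed 2s interval, and a standalone main), not minikube's actual node_ready.go.

// readinesspoll.go - illustrative sketch only; not minikube's node_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready=True
// or the context deadline (6m0s in the run above) expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q not Ready yet (will retry)\n", name)
		}
		select {
		case <-ctx.Done():
			// This is the branch the log above ends in: deadline exceeded.
			return fmt.Errorf("waiting for node %q: %w", name, ctx.Err())
		case <-time.After(2 * time.Second): // assumed polling interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-894836-m03"); err != nil {
		fmt.Println("X", err)
	}
}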
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-894836
helpers_test.go:243: (dbg) docker inspect ha-894836:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577",
	        "Created": "2025-10-29T08:41:13.884631643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:47:21.800876334Z",
	            "FinishedAt": "2025-10-29T08:47:21.16806896Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/hostname",
	        "HostsPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/hosts",
	        "LogPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577-json.log",
	        "Name": "/ha-894836",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-894836:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-894836",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577",
	                "LowerDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-894836",
	                "Source": "/var/lib/docker/volumes/ha-894836/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-894836",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-894836",
	                "name.minikube.sigs.k8s.io": "ha-894836",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6e74e15151ebcdec78f0c531e590064d6bb05fc075b51560c345f672aa3c577",
	            "SandboxKey": "/var/run/docker/netns/f6e74e15151e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-894836": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:33:dd:d4:71:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0687088684ea4c5a5709e0ca87c1a9ca99a57d381b08036eb4f13d9a4d606eb4",
	                    "EndpointID": "8936c5bd5e09c1315f13d32a72ef61578012dcc563588dd57720a11fcdb4992e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-894836",
	                        "40404985106a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
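A note on the port bindings in the inspect output above: every service port of the kic container (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 with an ephemeral host port, and the restart log further down recovers the SSH port (32808 here) from the 22/tcp mapping. A small sketch of that same lookup, shelling out to docker with the template the log itself uses; the helper name and hard-coded profile are for illustration only.

// hostport.go - illustrative helper, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker mapped to the container's 22/tcp,
// using the same Go template that appears in the minikube log below.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-894836")
	if err != nil {
		fmt.Println(err)
		return
	}
	// With the bindings shown above this prints 32808, reachable only via 127.0.0.1.
	fmt.Println("ssh host port:", port)
}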
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-894836 -n ha-894836
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 logs -n 25: (1.585358117s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836-m03_ha-894836-m02.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m02 sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m02.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836-m04:/home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp testdata/cp-test.txt ha-894836-m04:/home/docker/cp-test.txt                                                             │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836-m04.txt │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836:/home/docker/cp-test_ha-894836-m04_ha-894836.txt                       │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836.txt                                                 │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m02 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m03:/home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node start m02 --alsologtostderr -v 5                                                                                      │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node list --alsologtostderr -v 5                                                                                           │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │                     │
	│ stop    │ ha-894836 stop --alsologtostderr -v 5                                                                                                │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:47 UTC │
	│ start   │ ha-894836 start --wait true --alsologtostderr -v 5                                                                                   │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:47 UTC │                     │
	│ node    │ ha-894836 node list --alsologtostderr -v 5                                                                                           │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:47:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:47:21.529499   51643 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:47:21.529606   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529621   51643 out.go:374] Setting ErrFile to fd 2...
	I1029 08:47:21.529626   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529872   51643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:47:21.530226   51643 out.go:368] Setting JSON to false
	I1029 08:47:21.531000   51643 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1793,"bootTime":1761725848,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:47:21.531062   51643 start.go:143] virtualization:  
	I1029 08:47:21.534496   51643 out.go:179] * [ha-894836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:47:21.538440   51643 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:47:21.538583   51643 notify.go:221] Checking for updates...
	I1029 08:47:21.544526   51643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:47:21.547326   51643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:21.550152   51643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:47:21.553042   51643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:47:21.555854   51643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:47:21.559195   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:21.559391   51643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:47:21.590221   51643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:47:21.590337   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.646530   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.636887182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.646636   51643 docker.go:319] overlay module found
	I1029 08:47:21.651571   51643 out.go:179] * Using the docker driver based on existing profile
	I1029 08:47:21.654406   51643 start.go:309] selected driver: docker
	I1029 08:47:21.654426   51643 start.go:930] validating driver "docker" against &{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.654576   51643 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:47:21.654673   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.713521   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.703756989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.713963   51643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:21.713998   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:21.714048   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:21.714093   51643 start.go:353] cluster config:
	{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.719068   51643 out.go:179] * Starting "ha-894836" primary control-plane node in "ha-894836" cluster
	I1029 08:47:21.721819   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:21.724835   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:21.727599   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:21.727626   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:21.727647   51643 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:47:21.727666   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:21.727743   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:21.727753   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:21.727909   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:21.745168   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:21.745191   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:21.745207   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:21.745229   51643 start.go:360] acquireMachinesLock for ha-894836: {Name:mk81ec6bdb62bf512bc2903a97ef9ba531fecfa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:21.745296   51643 start.go:364] duration metric: took 49.552µs to acquireMachinesLock for "ha-894836"
	I1029 08:47:21.745320   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:21.745329   51643 fix.go:54] fixHost starting: 
	I1029 08:47:21.745587   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:21.762859   51643 fix.go:112] recreateIfNeeded on ha-894836: state=Stopped err=<nil>
	W1029 08:47:21.762919   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:21.766255   51643 out.go:252] * Restarting existing docker container for "ha-894836" ...
	I1029 08:47:21.766345   51643 cli_runner.go:164] Run: docker start ha-894836
	I1029 08:47:22.012669   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:22.033117   51643 kic.go:430] container "ha-894836" state is running.
	I1029 08:47:22.033526   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:22.057333   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:22.057589   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:22.057651   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:22.080561   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:22.080896   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:22.080906   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:22.081644   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:47:25.232635   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.232719   51643 ubuntu.go:182] provisioning hostname "ha-894836"
	I1029 08:47:25.232811   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.251060   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.251387   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.251404   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836 && echo "ha-894836" | sudo tee /etc/hostname
	I1029 08:47:25.413694   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.413779   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.431658   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.431987   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.432010   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:25.580597   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:25.580622   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:25.580654   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:25.580671   51643 provision.go:84] configureAuth start
	I1029 08:47:25.580734   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:25.598256   51643 provision.go:143] copyHostCerts
	I1029 08:47:25.598293   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598330   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:25.598336   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598412   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:25.598503   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598519   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:25.598523   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598549   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:25.598597   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598618   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:25.598622   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598646   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:25.598700   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836 san=[127.0.0.1 192.168.49.2 ha-894836 localhost minikube]
	I1029 08:47:26.140516   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:26.140603   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:26.140697   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.157969   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.259769   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:26.259831   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:26.276774   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:26.276833   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:26.294325   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:26.294387   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1029 08:47:26.312588   51643 provision.go:87] duration metric: took 731.894787ms to configureAuth
	I1029 08:47:26.312652   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:26.312914   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:26.313019   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.330542   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:26.330847   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:26.330868   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:26.749842   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:26.749867   51643 machine.go:97] duration metric: took 4.692267534s to provisionDockerMachine
	I1029 08:47:26.749878   51643 start.go:293] postStartSetup for "ha-894836" (driver="docker")
	I1029 08:47:26.749923   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:26.750004   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:26.750092   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.771117   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.878934   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:26.882605   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:26.882634   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:26.882646   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:26.882718   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:26.882831   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:26.882843   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:26.882991   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:26.891148   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:26.909280   51643 start.go:296] duration metric: took 159.355379ms for postStartSetup
	I1029 08:47:26.909405   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:26.909466   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.925846   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.025507   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:27.030364   51643 fix.go:56] duration metric: took 5.285027579s for fixHost
	I1029 08:47:27.030393   51643 start.go:83] releasing machines lock for "ha-894836", held for 5.285083572s
	I1029 08:47:27.030473   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:27.046867   51643 ssh_runner.go:195] Run: cat /version.json
	I1029 08:47:27.046908   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:27.046925   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.046972   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.072712   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.075970   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.176083   51643 ssh_runner.go:195] Run: systemctl --version
	I1029 08:47:27.271259   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:27.306996   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:27.311297   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:27.311362   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:27.318983   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:27.319008   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:27.319038   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:27.319083   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:27.334445   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:27.347545   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:27.347636   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:27.363332   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:27.376173   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:27.492370   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:27.612596   51643 docker.go:234] disabling docker service ...
	I1029 08:47:27.612724   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:27.628742   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:27.643114   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:27.769923   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:27.894105   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:27.906720   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:27.921611   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:27.921734   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.930389   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:27.930505   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.939285   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.947870   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.956623   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:27.965519   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.974392   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.982657   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.991382   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:27.999251   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:28.008477   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.138673   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
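	For reference, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl shown in the commands; a quick way to confirm on the node is (a sketch, assuming the same drop-in path):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",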
	I1029 08:47:28.265137   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:28.265257   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:28.269363   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:28.269468   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:28.273391   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:28.298305   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:28.298482   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.332193   51643 ssh_runner.go:195] Run: crio --version
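	The runtime endpoint written to /etc/crictl.yaml above is what lets the plain "crictl version" call here talk to CRI-O; the equivalent explicit invocation (a sketch using crictl's standard flag) is:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version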
	I1029 08:47:28.363359   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:28.366252   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:28.382546   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:28.386569   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:28.396854   51643 kubeadm.go:884] updating cluster {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:47:28.397006   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:28.397068   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.434678   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.434703   51643 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:47:28.434770   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.460074   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.460096   51643 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:47:28.460105   51643 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:47:28.460221   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
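	The kubelet unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); the empty "ExecStart=" line first is the standard systemd way to replace, rather than append to, the command. To inspect the effective unit on the node (a sketch):
	  systemctl cat kubelet
	  sudo journalctl -u kubelet -n 20 --no-pager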
	I1029 08:47:28.460331   51643 ssh_runner.go:195] Run: crio config
	I1029 08:47:28.513402   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:28.513423   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:28.513438   51643 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:47:28.513462   51643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-894836 NodeName:ha-894836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
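	With four nodes detected, minikube recommends kindnet as the CNI and keeps the pod CIDR at 10.244.0.0/16 (matching clusterCIDR in the kube-proxy config below). Once the cluster is back up it can be checked with (a sketch; the DaemonSet name "kindnet" is assumed from minikube's bundled manifest, not shown in this log):
	  kubectl -n kube-system get daemonset kindnet -o wide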
	I1029 08:47:28.513598   51643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-894836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
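	The generated config stitches together four kubeadm documents: InitConfiguration (node registration against the CRI-O socket), ClusterConfiguration (control-plane endpoint control-plane.minikube.internal:8443), KubeletConfiguration (cgroupfs driver, eviction thresholds of "0%" so disk-pressure eviction is effectively disabled), and KubeProxyConfiguration (clusterCIDR 10.244.0.0/16). It is uploaded as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later diffed against the existing file; to view it on the node (a sketch):
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new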
	
	I1029 08:47:28.513621   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:28.513670   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:28.525412   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:28.525541   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
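	Because "lsmod | grep ip_vs" above returned nothing, kube-vip falls back to plain ARP-based leader election for the VIP: the pod holding the plndr-cp-lock lease announces 192.168.49.254 on eth0, and no IPVS load-balancing across API servers is configured. Two quick checks once the node is up (a sketch):
	  ip addr show eth0 | grep 192.168.49.254        # only the current leader carries the VIP
	  kubectl -n kube-system get lease plndr-cp-lock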
	I1029 08:47:28.525629   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:28.533537   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:28.533649   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1029 08:47:28.541256   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1029 08:47:28.554128   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:28.567304   51643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1029 08:47:28.580046   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:28.592794   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:28.596388   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
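	Note that control-plane.minikube.internal (the controlPlaneEndpoint in the ClusterConfiguration above) is pinned to 192.168.49.254, i.e. the kube-vip address, so kubelet and kubeadm on every node reach whichever control plane currently holds the VIP. A minimal check (a sketch):
	  grep control-plane.minikube.internal /etc/hosts
	  # 192.168.49.254	control-plane.minikube.internal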
	I1029 08:47:28.605938   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.721205   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:28.736487   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.2
	I1029 08:47:28.736507   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:28.736536   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:28.736703   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:28.736755   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:28.736768   51643 certs.go:257] generating profile certs ...
	I1029 08:47:28.736855   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:28.736885   51643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c
	I1029 08:47:28.736902   51643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1029 08:47:29.326544   51643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c ...
	I1029 08:47:29.326575   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c: {Name:mk2c66c1b3a93815ffa793a9ebfc638bd973efe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326766   51643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c ...
	I1029 08:47:29.326783   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c: {Name:mk64676774836dc306d0667653f14bbfbbb06e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326872   51643 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt
	I1029 08:47:29.327021   51643 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key
	I1029 08:47:29.327155   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:29.327173   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:29.327190   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:29.327208   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:29.327227   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:29.327243   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:29.327257   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:29.327275   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:29.327286   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:29.327336   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:29.327368   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:29.327380   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:29.327404   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:29.327429   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:29.327455   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:29.327499   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:29.327529   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.327546   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.327560   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.328197   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:29.346024   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:29.368215   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:29.401494   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:29.429372   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:29.456963   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:29.488058   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:29.518940   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:29.566867   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:29.611519   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:29.660809   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:29.699081   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:47:29.722213   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:29.732266   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:29.745012   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751640   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751710   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.814511   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:29.826133   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:29.838154   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844165   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844232   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.905999   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:29.913848   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:29.924235   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932561   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932629   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.989153   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
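	The three symlinks created here follow the OpenSSL hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 points at the CA file, and the hash is what "openssl x509 -hash -noout" (run a few lines above) prints, b5213941 in the case of minikubeCA.pem. Verifying one by hand (a sketch):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	  ls -l /etc/ssl/certs/b5213941.0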
	I1029 08:47:29.997241   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:30.008565   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:30.100996   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:30.148023   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:30.205555   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:30.248683   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:30.291195   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
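	Each "-checkend 86400" run asserts that the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would signal that the cert expires within a day. The same check for any cert (a sketch):
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for at least 24h"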
	I1029 08:47:30.333318   51643 kubeadm.go:401] StartCluster: {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:30.333452   51643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:47:30.333514   51643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:47:30.363953   51643 cri.go:89] found id: "e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe"
	I1029 08:47:30.363975   51643 cri.go:89] found id: "a917c056972ea87cbf263c90930d10cb54f7d7c4f044215f8091e6dc6ec698fe"
	I1029 08:47:30.363981   51643 cri.go:89] found id: "67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8"
	I1029 08:47:30.363984   51643 cri.go:89] found id: "ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c"
	I1029 08:47:30.363987   51643 cri.go:89] found id: "c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86"
	I1029 08:47:30.363991   51643 cri.go:89] found id: ""
	I1029 08:47:30.364037   51643 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 08:47:30.375323   51643 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:47:30Z" level=error msg="open /run/runc: no such file or directory"
	I1029 08:47:30.375401   51643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:47:30.385470   51643 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 08:47:30.385492   51643 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 08:47:30.385554   51643 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 08:47:30.394291   51643 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:30.394701   51643 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-894836" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.394803   51643 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-894836" cluster setting kubeconfig missing "ha-894836" context setting]
	I1029 08:47:30.395074   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.395601   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 08:47:30.396079   51643 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 08:47:30.396100   51643 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 08:47:30.396107   51643 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 08:47:30.396112   51643 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 08:47:30.396116   51643 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 08:47:30.396600   51643 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 08:47:30.396732   51643 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1029 08:47:30.405937   51643 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1029 08:47:30.405963   51643 kubeadm.go:602] duration metric: took 20.455594ms to restartPrimaryControlPlane
	I1029 08:47:30.405973   51643 kubeadm.go:403] duration metric: took 72.664815ms to StartCluster
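	The restart path above works by diffing the kubeadm config already on disk (/var/tmp/minikube/kubeadm.yaml) against the freshly generated kubeadm.yaml.new; since they match and /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and the etcd data dir all exist, minikube concludes the running cluster "does not require reconfiguration" and skips re-running kubeadm on this node. Reproducing the check (a sketch):
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"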
	I1029 08:47:30.405988   51643 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406062   51643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.406653   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406844   51643 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:30.406872   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:30.406887   51643 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 08:47:30.407409   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.412586   51643 out.go:179] * Enabled addons: 
	I1029 08:47:30.415502   51643 addons.go:515] duration metric: took 8.615131ms for enable addons: enabled=[]
	I1029 08:47:30.415550   51643 start.go:247] waiting for cluster config update ...
	I1029 08:47:30.415564   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:30.418838   51643 out.go:203] 
	I1029 08:47:30.421986   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.422163   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.425622   51643 out.go:179] * Starting "ha-894836-m02" control-plane node in "ha-894836" cluster
	I1029 08:47:30.428500   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:30.431446   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:30.434321   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:30.434374   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:30.434516   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:30.434549   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:30.434704   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.434965   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:30.469091   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:30.469113   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:30.469126   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:30.469150   51643 start.go:360] acquireMachinesLock for ha-894836-m02: {Name:mkb930aec8192c14094c9c711c93e26847bf9202 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:30.469207   51643 start.go:364] duration metric: took 40.936µs to acquireMachinesLock for "ha-894836-m02"
	I1029 08:47:30.469228   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:30.469233   51643 fix.go:54] fixHost starting: m02
	I1029 08:47:30.469504   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.500880   51643 fix.go:112] recreateIfNeeded on ha-894836-m02: state=Stopped err=<nil>
	W1029 08:47:30.500905   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:30.506548   51643 out.go:252] * Restarting existing docker container for "ha-894836-m02" ...
	I1029 08:47:30.506637   51643 cli_runner.go:164] Run: docker start ha-894836-m02
	I1029 08:47:30.853634   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.880386   51643 kic.go:430] container "ha-894836-m02" state is running.
	I1029 08:47:30.880745   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:30.905743   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.905982   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:30.906048   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:30.933559   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:30.933904   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:30.933913   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:30.934536   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55068->127.0.0.1:32813: read: connection reset by peer
	I1029 08:47:34.203957   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.204004   51643 ubuntu.go:182] provisioning hostname "ha-894836-m02"
	I1029 08:47:34.204076   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.234369   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.234685   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.234703   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m02 && echo "ha-894836-m02" | sudo tee /etc/hostname
	I1029 08:47:34.542369   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.542516   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.574456   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.574762   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.574779   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:34.827546   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
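	The provisioning snippet above keeps the Debian convention of mapping the machine's own hostname to 127.0.1.1: if no entry for ha-894836-m02 exists it either rewrites an existing 127.0.1.1 line or appends one, so the hostname set via "sudo hostname ha-894836-m02" keeps resolving locally. To confirm (a sketch):
	  grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 ha-894836-m02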
	I1029 08:47:34.827578   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:34.827603   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:34.827638   51643 provision.go:84] configureAuth start
	I1029 08:47:34.827714   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:34.862097   51643 provision.go:143] copyHostCerts
	I1029 08:47:34.862139   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862171   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:34.862183   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862258   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:34.862339   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862362   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:34.862367   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862394   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:34.862440   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862461   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:34.862469   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862496   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:34.862545   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m02 san=[127.0.0.1 192.168.49.3 ha-894836-m02 localhost minikube]
	I1029 08:47:35.182658   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:35.182745   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:35.182793   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.201881   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:35.346712   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:35.346775   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:35.384129   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:35.384198   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:47:35.415588   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:35.415653   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:35.457021   51643 provision.go:87] duration metric: took 629.369458ms to configureAuth
	I1029 08:47:35.457058   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:35.457378   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:35.457501   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.485978   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:35.486288   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:35.486309   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:35.984048   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:35.984077   51643 machine.go:97] duration metric: took 5.078076838s to provisionDockerMachine
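	The sysconfig file written just above adds "--insecure-registry 10.96.0.0/12" to CRI-O's startup options, so image pulls from in-cluster registry Services (anything in the service CIDR) are allowed over plain HTTP; the crio.service unit in the kicbase image is expected to source /etc/sysconfig/crio.minikube (an assumption about the unit, which is not shown in this log). A quick check (a sketch):
	  cat /etc/sysconfig/crio.minikube
	  systemctl cat crio | grep -i EnvironmentFile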
	I1029 08:47:35.984093   51643 start.go:293] postStartSetup for "ha-894836-m02" (driver="docker")
	I1029 08:47:35.984105   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:35.984167   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:35.984212   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.009654   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.121479   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:36.125706   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:36.125737   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:36.125748   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:36.125802   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:36.125883   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:36.125902   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:36.126006   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:36.133908   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:36.152562   51643 start.go:296] duration metric: took 168.452944ms for postStartSetup
	I1029 08:47:36.152710   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:36.152752   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.170976   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.276973   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:36.287814   51643 fix.go:56] duration metric: took 5.818573756s for fixHost
	I1029 08:47:36.287841   51643 start.go:83] releasing machines lock for "ha-894836-m02", held for 5.818626179s
	I1029 08:47:36.287916   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:36.328488   51643 out.go:179] * Found network options:
	I1029 08:47:36.331520   51643 out.go:179]   - NO_PROXY=192.168.49.2
	W1029 08:47:36.337513   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:47:36.337573   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:47:36.337636   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:36.337690   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.337952   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:36.338007   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.372705   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.382161   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.725650   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:36.732748   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:36.732831   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:36.748828   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:36.748854   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:36.748899   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:36.748976   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:36.774113   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:36.799926   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:36.800009   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:36.821641   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:36.838818   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:37.085073   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:37.283501   51643 docker.go:234] disabling docker service ...
	I1029 08:47:37.283581   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:37.306704   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:37.329115   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:37.528935   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:37.724811   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:37.745385   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:37.766616   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:37.766687   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.777687   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:37.777763   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.790547   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.805597   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.824888   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:37.833592   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.847509   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.857690   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.870682   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:37.881416   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:37.893784   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:38.130979   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:47:38.346041   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:38.346156   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:38.350264   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:38.350326   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:38.353928   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:38.381039   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:38.381134   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.409799   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.443728   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:38.446621   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:47:38.449812   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:38.466711   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:38.470765   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:38.480879   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:47:38.481131   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:38.481434   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:38.498248   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:47:38.498544   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.3
	I1029 08:47:38.498558   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:38.498572   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:38.498695   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:38.498747   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:38.498755   51643 certs.go:257] generating profile certs ...
	I1029 08:47:38.498831   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:38.498903   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.d4a7ec17
	I1029 08:47:38.498943   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:38.498962   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:38.498975   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:38.498991   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:38.499002   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:38.499012   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:38.499039   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:38.499054   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:38.499064   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:38.499118   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:38.499148   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:38.499158   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:38.499189   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:38.499215   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:38.499239   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:38.499284   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:38.499315   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:38.499335   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:38.499349   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:38.499410   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:38.516805   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:38.612647   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:47:38.616561   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:47:38.624748   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:47:38.628258   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:47:38.637180   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:47:38.640891   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:47:38.650214   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:47:38.653972   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:47:38.662619   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:47:38.666317   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:47:38.674366   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:47:38.678199   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:47:38.686306   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:38.706856   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:38.724221   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:38.741317   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:38.759079   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:38.777104   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:38.794767   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:38.812149   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:38.830280   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:38.849527   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:38.870347   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:38.890190   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:47:38.904271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:47:38.917479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:47:38.930520   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:47:38.945717   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:47:38.959276   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:47:38.972479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:47:38.985067   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:38.991454   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:38.999996   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004703   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004780   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.050207   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:39.058997   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:39.067821   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071762   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071826   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.113725   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:39.121907   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:39.130312   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134430   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134513   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.176116   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
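The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above follow the OpenSSL subject-hash convention; a minimal shell sketch of how such a link is built (the hash is computed, not hard-coded):

	# derive the subject hash and create the /etc/ssl/certs/<hash>.0 symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"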
	I1029 08:47:39.184143   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:39.188071   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:39.229804   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:39.271125   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:39.314420   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:39.358357   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:39.404199   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
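Each "-checkend 86400" run above exits non-zero if the certificate expires within the next 24 hours; a minimal sketch of acting on that exit status (path taken from the log):

	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
	  echo "etcd peer certificate expires within 24 hours; would need regeneration" >&2
	fi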
	I1029 08:47:39.450657   51643 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1029 08:47:39.450775   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:47:39.450808   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:39.450861   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:39.462795   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:39.462879   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
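Because the lsmod check above found no ip_vs modules, this kube-vip manifest is generated without IPVS-based control-plane load-balancing and relies on ARP announcement of 192.168.49.254 only; a minimal sketch of verifying module availability by hand (illustrative, not part of the run):

	# list any loaded ip_vs modules; an empty result matches the fallback seen above
	lsmod | grep -E '^ip_vs' || echo "no ip_vs modules loaded - kube-vip stays in ARP-only mode"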
	I1029 08:47:39.462977   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:39.471222   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:39.471296   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:47:39.480280   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:47:39.493347   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:39.506856   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:39.521570   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:39.525461   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:39.536266   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.680061   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.694883   51643 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:39.695320   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:39.699488   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:47:39.702679   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.837549   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.854606   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:47:39.854679   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:47:39.854929   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m02" to be "Ready" ...
	W1029 08:47:49.857769   51643 node_ready.go:55] error getting node "ha-894836-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-894836-m02": net/http: TLS handshake timeout
	I1029 08:47:52.860254   51643 node_ready.go:49] node "ha-894836-m02" is "Ready"
	I1029 08:47:52.860290   51643 node_ready.go:38] duration metric: took 13.005340499s for node "ha-894836-m02" to be "Ready" ...
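After the single TLS handshake timeout at 08:47:49 the Ready poll succeeds; a minimal sketch of the equivalent manual check against this profile (assuming the kubectl context name matches the profile name):

	kubectl --context ha-894836 get node ha-894836-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "True" once the kubelet on m02 reports the Ready condition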
	I1029 08:47:52.860304   51643 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:47:52.860384   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.361211   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.860507   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.360916   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.860446   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.361159   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.860486   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.361306   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.860828   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.360541   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.860525   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.361238   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.374939   51643 api_server.go:72] duration metric: took 18.680010468s to wait for apiserver process to appear ...
	I1029 08:47:58.374971   51643 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:47:58.374992   51643 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:47:58.386476   51643 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:47:58.388170   51643 api_server.go:141] control plane version: v1.34.1
	I1029 08:47:58.388195   51643 api_server.go:131] duration metric: took 13.217297ms to wait for apiserver health ...
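A minimal sketch of the two waits above done by hand (process check, then the healthz endpoint from the log; -k is needed because the apiserver serves a cluster-CA-signed certificate):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # apiserver process is up
	curl -sk https://192.168.49.2:8443/healthz     # prints "ok" when healthy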
	I1029 08:47:58.388204   51643 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:47:58.397073   51643 system_pods.go:59] 26 kube-system pods found
	I1029 08:47:58.397155   51643 system_pods.go:61] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.397179   51643 system_pods.go:61] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.397217   51643 system_pods.go:61] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.397245   51643 system_pods.go:61] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.397271   51643 system_pods.go:61] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.397328   51643 system_pods.go:61] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.397356   51643 system_pods.go:61] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.397405   51643 system_pods.go:61] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.397432   51643 system_pods.go:61] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.397457   51643 system_pods.go:61] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.397494   51643 system_pods.go:61] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.397520   51643 system_pods.go:61] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.397554   51643 system_pods.go:61] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.397597   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.397620   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.397668   51643 system_pods.go:61] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.397697   51643 system_pods.go:61] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.397724   51643 system_pods.go:61] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.397756   51643 system_pods.go:61] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.397780   51643 system_pods.go:61] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.397802   51643 system_pods.go:61] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.397842   51643 system_pods.go:61] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.397867   51643 system_pods.go:61] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.397978   51643 system_pods.go:61] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.398003   51643 system_pods.go:61] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.398030   51643 system_pods.go:61] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.398069   51643 system_pods.go:74] duration metric: took 9.856974ms to wait for pod list to return data ...
	I1029 08:47:58.398098   51643 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:47:58.402325   51643 default_sa.go:45] found service account: "default"
	I1029 08:47:58.402401   51643 default_sa.go:55] duration metric: took 4.283713ms for default service account to be created ...
	I1029 08:47:58.402426   51643 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:47:58.411486   51643 system_pods.go:86] 26 kube-system pods found
	I1029 08:47:58.411568   51643 system_pods.go:89] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.411592   51643 system_pods.go:89] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.411631   51643 system_pods.go:89] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.411661   51643 system_pods.go:89] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.411686   51643 system_pods.go:89] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.411725   51643 system_pods.go:89] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.411755   51643 system_pods.go:89] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.411785   51643 system_pods.go:89] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.411826   51643 system_pods.go:89] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.411849   51643 system_pods.go:89] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.411887   51643 system_pods.go:89] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.411913   51643 system_pods.go:89] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.411942   51643 system_pods.go:89] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.411982   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.412004   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.412046   51643 system_pods.go:89] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.412074   51643 system_pods.go:89] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.412099   51643 system_pods.go:89] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.412131   51643 system_pods.go:89] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.412157   51643 system_pods.go:89] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.412180   51643 system_pods.go:89] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.412217   51643 system_pods.go:89] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.412244   51643 system_pods.go:89] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.412269   51643 system_pods.go:89] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.412360   51643 system_pods.go:89] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.412396   51643 system_pods.go:89] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.412419   51643 system_pods.go:126] duration metric: took 9.970092ms to wait for k8s-apps to be running ...
	I1029 08:47:58.412443   51643 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:47:58.412532   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:47:58.430648   51643 system_svc.go:56] duration metric: took 18.183914ms WaitForService to wait for kubelet
	I1029 08:47:58.430727   51643 kubeadm.go:587] duration metric: took 18.735792001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:58.430763   51643 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:47:58.435505   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435585   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435615   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435636   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435667   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435691   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435709   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435750   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435776   51643 node_conditions.go:105] duration metric: took 4.978006ms to run NodePressure ...
	I1029 08:47:58.435804   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:58.435853   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:58.439739   51643 out.go:203] 
	I1029 08:47:58.443690   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:58.443882   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.447597   51643 out.go:179] * Starting "ha-894836-m03" control-plane node in "ha-894836" cluster
	I1029 08:47:58.451296   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:58.454468   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:58.457455   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:58.457578   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:58.457532   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:58.457963   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:58.457997   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:58.458193   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.484925   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:58.484945   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:58.484957   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:58.484981   51643 start.go:360] acquireMachinesLock for ha-894836-m03: {Name:mkff6279e1eccd0127b32c0d6857db9b3fa3dac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:58.485031   51643 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-894836-m03"
	I1029 08:47:58.485050   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:58.485055   51643 fix.go:54] fixHost starting: m03
	I1029 08:47:58.485336   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.517723   51643 fix.go:112] recreateIfNeeded on ha-894836-m03: state=Stopped err=<nil>
	W1029 08:47:58.517747   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:58.521056   51643 out.go:252] * Restarting existing docker container for "ha-894836-m03" ...
	I1029 08:47:58.521146   51643 cli_runner.go:164] Run: docker start ha-894836-m03
	I1029 08:47:58.923330   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.955597   51643 kic.go:430] container "ha-894836-m03" state is running.
	I1029 08:47:58.955975   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:47:58.985436   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.985727   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:58.985800   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:47:59.021071   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:59.021382   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:47:59.021392   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:59.022242   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:48:02.369899   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.369983   51643 ubuntu.go:182] provisioning hostname "ha-894836-m03"
	I1029 08:48:02.370089   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.396111   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.396431   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.396444   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m03 && echo "ha-894836-m03" | sudo tee /etc/hostname
	I1029 08:48:02.706986   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.707060   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.732902   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.733206   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.733231   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:48:03.018167   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:48:03.018188   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:48:03.018211   51643 ubuntu.go:190] setting up certificates
	I1029 08:48:03.018221   51643 provision.go:84] configureAuth start
	I1029 08:48:03.018284   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:03.051408   51643 provision.go:143] copyHostCerts
	I1029 08:48:03.051450   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051486   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:48:03.051493   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051568   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:48:03.051644   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051661   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:48:03.051666   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051690   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:48:03.051728   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051744   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:48:03.051748   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051770   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:48:03.051815   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m03 san=[127.0.0.1 192.168.49.4 ha-894836-m03 localhost minikube]
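A minimal sketch of inspecting the SAN list baked into the server certificate generated above (path from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'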
	I1029 08:48:04.283916   51643 provision.go:177] copyRemoteCerts
	I1029 08:48:04.283985   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:48:04.284031   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.301428   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:04.461287   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:48:04.461367   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:48:04.496816   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:48:04.496881   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:48:04.527177   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:48:04.527250   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:48:04.556555   51643 provision.go:87] duration metric: took 1.5383197s to configureAuth
	I1029 08:48:04.556585   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:48:04.556817   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:48:04.556919   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.581700   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:04.581999   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:04.582018   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:48:05.181543   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:48:05.181567   51643 machine.go:97] duration metric: took 6.195829937s to provisionDockerMachine
	I1029 08:48:05.181589   51643 start.go:293] postStartSetup for "ha-894836-m03" (driver="docker")
	I1029 08:48:05.181600   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:48:05.181674   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:48:05.181722   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.207592   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.322834   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:48:05.327694   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:48:05.327775   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:48:05.327808   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:48:05.327899   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:48:05.328050   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:48:05.328079   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:48:05.328256   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:48:05.343080   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:48:05.371323   51643 start.go:296] duration metric: took 189.718932ms for postStartSetup
	I1029 08:48:05.371417   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:48:05.371455   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.397947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.541458   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:48:05.561976   51643 fix.go:56] duration metric: took 7.076913817s for fixHost
	I1029 08:48:05.562004   51643 start.go:83] releasing machines lock for "ha-894836-m03", held for 7.076964665s
	I1029 08:48:05.562072   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:05.600883   51643 out.go:179] * Found network options:
	I1029 08:48:05.604417   51643 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1029 08:48:05.607757   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607793   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607816   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607826   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:48:05.607887   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:48:05.607928   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.607983   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:48:05.608041   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.654947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.658008   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:06.130162   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:48:06.143305   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:48:06.143421   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:48:06.167460   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:48:06.167489   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:48:06.167523   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:48:06.167572   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:48:06.213970   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:48:06.251029   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:48:06.251087   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:48:06.290080   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:48:06.327709   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:48:06.726326   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:48:07.139091   51643 docker.go:234] disabling docker service ...
	I1029 08:48:07.139182   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:48:07.178202   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:48:07.209433   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:48:07.608392   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:48:08.086947   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:48:08.121769   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:48:08.184236   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:48:08.184326   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.215828   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:48:08.215914   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.238638   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.269033   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.295262   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:48:08.331399   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.356819   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.389668   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.403860   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:48:08.423244   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:48:08.437579   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:48:08.832580   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:49:39.275381   51643 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.442758035s)
	I1029 08:49:39.275412   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:49:39.275483   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:49:39.279771   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:49:39.279855   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:49:39.284759   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:49:39.334853   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:49:39.334984   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.371804   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.405984   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:49:39.412429   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:49:39.415504   51643 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1029 08:49:39.418469   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:49:39.435673   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:49:39.440794   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:39.451208   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:49:39.451471   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:39.451781   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:49:39.468915   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:49:39.469188   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.4
	I1029 08:49:39.469202   51643 certs.go:195] generating shared ca certs ...
	I1029 08:49:39.469216   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:49:39.469334   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:49:39.469401   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:49:39.469413   51643 certs.go:257] generating profile certs ...
	I1029 08:49:39.469489   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:49:39.469559   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.761eb988
	I1029 08:49:39.469601   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:49:39.469613   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:49:39.469625   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:49:39.469641   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:49:39.469654   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:49:39.469666   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:49:39.469679   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:49:39.469694   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:49:39.469705   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:49:39.469761   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:49:39.469793   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:49:39.469805   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:49:39.469829   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:49:39.469858   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:49:39.469887   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:49:39.469934   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:49:39.469964   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:49:39.469983   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:39.469994   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:49:39.470057   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:49:39.488996   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:49:39.588688   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:49:39.592443   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:49:39.600773   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:49:39.604466   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:49:39.613528   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:49:39.617112   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:49:39.625577   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:49:39.629278   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:49:39.637493   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:49:39.641121   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:49:39.650070   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:49:39.653954   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:49:39.662931   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:49:39.685107   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:49:39.705459   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:49:39.724858   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:49:39.743556   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:49:39.762456   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:49:39.781042   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:49:39.803894   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:49:39.827899   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:49:39.848693   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:49:39.875006   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:49:39.895980   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:49:39.909585   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:49:39.922536   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:49:39.935718   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:49:39.950308   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:49:39.965160   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:49:39.979271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:49:39.992671   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:49:39.999106   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:49:40.009754   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016736   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016877   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.067934   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:49:40.077186   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:49:40.086864   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091154   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091257   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.134215   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:49:40.142049   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:49:40.150815   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154732   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154796   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.196358   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:49:40.204753   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:49:40.208825   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:49:40.251130   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:49:40.293659   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:49:40.335303   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:49:40.378403   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:49:40.419111   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
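Note: the hash/symlink and -checkend steps above are standard OpenSSL idioms; a stand-alone illustration of the same commands (paths and the b5213941 hash taken from this run, shown here only as a sketch):

  # Link the CA into /etc/ssl/certs and add the OpenSSL subject-hash symlink,
  # mirroring the commands logged above (b5213941 is minikubeCA's subject hash in this run).
  CERT_SRC=/usr/share/ca-certificates/minikubeCA.pem
  CERT_DST=/etc/ssl/certs/minikubeCA.pem
  sudo /bin/bash -c "test -s $CERT_SRC && ln -fs $CERT_SRC $CERT_DST"
  HASH=$(openssl x509 -hash -noout -in "$CERT_SRC")
  sudo /bin/bash -c "test -L /etc/ssl/certs/$HASH.0 || ln -fs $CERT_DST /etc/ssl/certs/$HASH.0"

  # -checkend N exits non-zero if the certificate expires within N seconds (86400 = 24h),
  # which is what the batch of openssl checks above verifies for each control-plane cert.
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expires within 24h"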
	I1029 08:49:40.459947   51643 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1029 08:49:40.460045   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:49:40.460074   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:49:40.460122   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:49:40.472263   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:49:40.472402   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
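Note: because lsmod found no ip_vs modules, the manifest above leaves kube-vip in ARP failover mode, announcing 192.168.49.254 on eth0 and forwarding port 8443. A small, hypothetical smoke test (not part of the minikube flow) to confirm something answers on the VIP once the manifest is in place:

  # Expect an HTTP status code (typically 401/403 without credentials) once kube-vip has
  # claimed 192.168.49.254 and is forwarding to a healthy apiserver on port 8443.
  curl -k -sS -o /dev/null -w '%{http_code}\n' --max-time 5 https://192.168.49.254:8443/ \
    || echo "VIP 192.168.49.254:8443 not reachable"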
	I1029 08:49:40.472491   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:49:40.482442   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:49:40.482527   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:49:40.491244   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:49:40.509334   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:49:40.522741   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:49:40.543511   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:49:40.549027   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:40.559626   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.700906   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
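Note: a quick, hypothetical way to confirm the kubelet drop-in written above took effect after the daemon-reload and restart (run on the m03 node; not part of the test flow):

  # Show the effective kubelet unit including the 10-kubeadm.conf drop-in, then check it is running.
  systemctl cat kubelet | grep -E 'hostname-override|node-ip'
  systemctl is-active kubelet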
	I1029 08:49:40.716131   51643 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:49:40.716494   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:40.720440   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:49:40.723093   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.849270   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.870801   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:49:40.870875   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:49:40.871137   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m03" to be "Ready" ...
	W1029 08:49:42.878542   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:45.376167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:47.875546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:49.879197   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:52.374859   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:54.874674   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:56.875642   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:59.385971   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:01.874925   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:04.375281   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:06.875417   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:08.877527   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:11.374735   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:13.374773   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:15.875423   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:18.374307   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:20.375009   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:22.875458   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:24.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:27.374436   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:29.375591   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:31.875678   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:33.876408   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:36.375279   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:38.875405   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:40.875687   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:43.375139   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:45.376751   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:47.874681   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:50.375198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:52.874746   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:54.875461   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:57.374875   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:59.375081   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:01.874956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:03.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:05.875856   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:07.875956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:10.374910   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:12.375300   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:14.874455   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:16.874501   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:18.881741   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:21.374575   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:23.375182   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:25.875630   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:28.375397   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:30.376726   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:32.874952   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:35.375371   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:37.875672   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:40.374584   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:42.375166   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:44.375299   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:46.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:48.876305   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:51.375111   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:53.375554   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:55.874828   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:58.374446   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:00.391777   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:02.875635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:05.374696   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:07.875548   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:10.374764   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:12.375076   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:14.874580   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:16.875240   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:18.880605   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:21.375072   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:23.875108   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:26.375196   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:28.375284   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:30.875177   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:32.875570   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:35.374573   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:37.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:39.375982   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:41.875595   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:44.377104   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:46.875402   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:48.877198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:51.375357   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:53.874734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:55.875011   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:57.875521   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:00.380590   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:02.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:05.375714   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:07.875383   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:10.374415   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:12.376491   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:14.875713   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:17.375204   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:19.377537   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:21.877439   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:24.375155   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:26.874635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:28.881623   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:31.374848   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:33.374930   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:35.875771   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:38.375835   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:40.875765   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:43.375167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:45.874879   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:47.878546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:50.375661   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:52.875435   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:55.375646   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:57.874489   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:59.875624   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:02.375174   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:04.874940   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:07.375497   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:09.875063   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:11.875223   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:13.875266   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:16.378660   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:18.883945   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:21.374606   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:23.376495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:25.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:28.375564   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:30.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:33.375292   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:35.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:38.375495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:40.874844   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:42.874893   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:45.376206   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:47.875511   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:50.375400   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:52.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:55.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:57.374957   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:59.375343   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:01.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:04.374336   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:06.374603   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:08.875609   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:11.375178   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:13.375447   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:15.376425   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:17.874841   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:20.375318   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:22.874543   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:25.375289   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:27.874901   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:30.374710   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:32.375028   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:34.375632   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:36.875017   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:38.877472   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	I1029 08:55:40.871415   51643 node_ready.go:38] duration metric: took 6m0.000252794s for node "ha-894836-m03" to be "Ready" ...
	I1029 08:55:40.874909   51643 out.go:203] 
	W1029 08:55:40.877827   51643 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1029 08:55:40.877849   51643 out.go:285] * 
	W1029 08:55:40.880012   51643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:55:40.882934   51643 out.go:203] 
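Note: the failure above is the 6-minute wait on m03's Ready condition timing out; the condition the test polls can be inspected by hand with commands like these (illustrative only, not part of the test):

  # Print the Ready condition status the wait loop was polling ("Unknown" throughout this run).
  kubectl --context ha-894836 get node ha-894836-m03 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
  # A Ready status stuck at Unknown usually means the node's kubelet stopped posting status;
  # its journal on m03 is the first place to look.
  minikube -p ha-894836 ssh -n ha-894836-m03 -- sudo journalctl -u kubelet --no-pager -n 50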
	
	
	==> CRI-O <==
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.405473293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=32150040-13c5-4993-9d53-1d8c8b936dae name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.406558171Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f533fe3b-c6cb-4daf-8190-4ca198dc0664 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.406654286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.411556435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.412037753Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5eff9b5708aaba3e35120e5c17dfcd8d88e7135226bba9538b85d1bdd299f814/merged/etc/passwd: no such file or directory"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.41219942Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5eff9b5708aaba3e35120e5c17dfcd8d88e7135226bba9538b85d1bdd299f814/merged/etc/group: no such file or directory"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.412764686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.435659025Z" level=info msg="Created container 3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b: kube-system/storage-provisioner/storage-provisioner" id=f533fe3b-c6cb-4daf-8190-4ca198dc0664 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.436586896Z" level=info msg="Starting container: 3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b" id=d8955627-909b-475a-944e-ac1a3b5d4e96 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.43975017Z" level=info msg="Started container" PID=1368 containerID=3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b description=kube-system/storage-provisioner/storage-provisioner id=d8955627-909b-475a-944e-ac1a3b5d4e96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.916829212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920294611Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920371732Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920393944Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.926141566Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.926179974Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.92620596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930512623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930548259Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930572817Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934035459Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934075393Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934102561Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.937441337Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.937480057Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3d37627bfbc5f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   c6058cbca67d0       storage-provisioner                 kube-system
	7e6beb43bb335       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   9aa14b66630e2       coredns-66bc5c9577-hhhxx            kube-system
	69e1be8c137ed       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   c6058cbca67d0       storage-provisioner                 kube-system
	e7956795c58f4       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   541c10c0d9e9d       busybox-7b57f96db7-hl8ll            default
	4ac7e4e48f2d6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   97069c7ad741e       kube-proxy-gxrz7                    kube-system
	b59e1fb940c3f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   662869c52a2c8       kindnet-bjfp7                       kube-system
	f4d98e59447db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   fb9556b60baf7       coredns-66bc5c9577-vcp67            kube-system
	e00d3f78d68d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   27c7e21f538bd       kube-apiserver-ha-894836            kube-system
	a917c056972ea       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   cb582940fcc64       kube-vip-ha-894836                  kube-system
	67e5abbb69757       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   615eac85d59b6       kube-scheduler-ha-894836            kube-system
	ffcbb54d6ce44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   1                   3a2ab0bee942f       kube-controller-manager-ha-894836   kube-system
	c5012e77d5995       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   0d7cccc011f06       etcd-ha-894836                      kube-system
	
	
	==> coredns [7e6beb43bb33582fbfaddc581b0968352916d1ba99aca6791d37ebb24f48a116] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34440 - 7065 "HINFO IN 8445725135211176428.1755746847705524405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013494166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f4d98e59447db0183f40bf805b64d3d4db57ead54fe530999384509e544cc7d9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42938 - 8700 "HINFO IN 4442209450395311171.7481964028264372801. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023094613s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
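Note: both CoreDNS replicas time out dialing the in-cluster API endpoint 10.96.0.1:443, which points at Service VIP routing (kube-proxy/iptables programming) rather than CoreDNS itself. A hedged way to check from the node (assumes curl is available in the node image; commands are illustrative):

  # From the node: does anything answer on the kubernetes Service ClusterIP CoreDNS is timing out on?
  minikube -p ha-894836 ssh -- 'curl -k -sS --max-time 5 https://10.96.0.1:443/version; echo exit=$?'
  # From the node: are kube-proxy's iptables rules for the Service VIP programmed at all?
  minikube -p ha-894836 ssh -- 'sudo iptables-save | grep -c 10.96.0.1'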
	
	
	==> describe nodes <==
	Name:               ha-894836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:41:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:55:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-894836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cd4b1ccd-742f-4f33-9ae4-c8bc3e629f16
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hl8ll             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hhhxx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-vcp67             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-894836                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-bjfp7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-894836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-894836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gxrz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-894836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-894836                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m41s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-894836 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           8m43s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   Starting                 8m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m13s (x8 over 8m14s)  kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m13s (x8 over 8m14s)  kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m13s (x8 over 8m14s)  kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           7m32s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	
	
	Name:               ha-894836-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:42:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:55:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:55:15 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:55:15 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:55:15 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:55:15 +0000   Wed, 29 Oct 2025 08:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-894836-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                80b3d6bd-ca52-4282-b4dd-9a277fb019ad
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fj895                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-894836-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-q8tvb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-894836-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-894836-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-59nqf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-894836-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-894836-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m20s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   NodeHasSufficientPID     9m15s (x8 over 9m15s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m15s (x8 over 9m15s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m15s (x8 over 9m15s)  kubelet          Node ha-894836-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m43s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   Starting                 8m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m9s (x8 over 8m9s)    kubelet          Node ha-894836-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m9s (x8 over 8m9s)    kubelet          Node ha-894836-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m9s (x8 over 8m9s)    kubelet          Node ha-894836-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           7m32s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	
	
	Name:               ha-894836-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_43_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:43:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:46:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 29 Oct 2025 08:46:51 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 29 Oct 2025 08:46:51 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 29 Oct 2025 08:46:51 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 29 Oct 2025 08:46:51 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-894836-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                58f4cbd3-c3bd-48cc-83b9-9e65dbe3cfca
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gmd49                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-894836-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-qkxpk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-894836-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-894836-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gd8g6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-894836-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-894836-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-894836-m03 event: Registered Node ha-894836-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-894836-m03 event: Registered Node ha-894836-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-894836-m03 event: Registered Node ha-894836-m03 in Controller
	  Normal  RegisteredNode  8m43s  node-controller  Node ha-894836-m03 event: Registered Node ha-894836-m03 in Controller
	  Normal  RegisteredNode  7m47s  node-controller  Node ha-894836-m03 event: Registered Node ha-894836-m03 in Controller
	  Normal  RegisteredNode  7m32s  node-controller  Node ha-894836-m03 event: Registered Node ha-894836-m03 in Controller
	  Normal  NodeNotReady    6m57s  node-controller  Node ha-894836-m03 status is now: NodeNotReady
	
	
	Name:               ha-894836-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_45_06_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:45:06 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:46:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-894836-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a6b33f47-a46d-4ce9-9424-db5d023a3b7c
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hg69g       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-bprsj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeReady                9m53s              kubelet          Node ha-894836-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m43s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           7m47s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           7m32s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeNotReady             6m57s              node-controller  Node ha-894836-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014848] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520802] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035216] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815569] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.730396] kauditd_printk_skb: 36 callbacks suppressed
	[Oct29 08:19] kauditd_printk_skb: 8 callbacks suppressed
	[Oct29 08:21] overlayfs: idmapped layers are currently not supported
	[  +0.080642] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct29 08:26] overlayfs: idmapped layers are currently not supported
	[Oct29 08:27] overlayfs: idmapped layers are currently not supported
	[Oct29 08:41] overlayfs: idmapped layers are currently not supported
	[Oct29 08:42] overlayfs: idmapped layers are currently not supported
	[Oct29 08:43] overlayfs: idmapped layers are currently not supported
	[Oct29 08:45] overlayfs: idmapped layers are currently not supported
	[Oct29 08:46] overlayfs: idmapped layers are currently not supported
	[Oct29 08:47] overlayfs: idmapped layers are currently not supported
	[  +4.220383] overlayfs: idmapped layers are currently not supported
	[Oct29 08:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86] <==
	{"level":"warn","ts":"2025-10-29T08:55:15.227058Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:17.778870Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:17.778926Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:20.227962Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:20.228043Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:21.780393Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:21.780442Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:25.228439Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:25.228466Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:25.781989Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:25.782051Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:29.782960Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:29.783015Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:30.229700Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:30.229714Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:33.784892Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:33.784949Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:35.230861Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:35.230867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:37.786299Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:37.786354Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:40.231758Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:40.231798Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:41.787909Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:41.787976Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 08:55:42 up 38 min,  0 user,  load average: 0.52, 1.44, 1.37
	Linux ha-894836 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b59e1fb940c3f6ad37293176d85dd63473e5ac8494b7819987c7064627f6d94c] <==
	I1029 08:55:10.921593       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:20.916830       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:20.916866       1 main.go:301] handling current node
	I1029 08:55:20.916883       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:20.916889       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:20.917040       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:20.917053       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:20.917112       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:20.917123       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:30.916442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:30.916483       1 main.go:301] handling current node
	I1029 08:55:30.916498       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:30.916503       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:30.916677       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:30.916691       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:30.916824       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:30.916838       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:40.923925       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:40.924058       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:40.924212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:40.924253       1 main.go:301] handling current node
	I1029 08:55:40.924305       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:40.924368       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:40.924514       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:40.924552       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe] <==
	I1029 08:47:52.921966       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 08:47:52.919543       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 08:47:52.926729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 08:47:52.926973       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 08:47:52.933488       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W1029 08:47:52.938611       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1029 08:47:52.945598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 08:47:52.946057       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 08:47:52.946083       1 policy_source.go:240] refreshing policies
	I1029 08:47:52.951298       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 08:47:52.951418       1 aggregator.go:171] initial CRD sync complete...
	I1029 08:47:52.951451       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 08:47:52.951481       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 08:47:52.951508       1 cache.go:39] Caches are synced for autoregister controller
	I1029 08:47:52.977975       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 08:47:52.993034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 08:47:53.040242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 08:47:53.057186       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1029 08:47:53.065202       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1029 08:47:53.534383       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1029 08:47:53.979043       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1029 08:47:54.542753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 08:47:59.474057       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 08:47:59.516146       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 08:47:59.654898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c] <==
	I1029 08:47:55.875853       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 08:47:55.882183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 08:47:55.882278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 08:47:55.882348       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 08:47:55.883459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 08:47:55.887670       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 08:47:55.890946       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:47:55.891049       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 08:47:55.892406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 08:47:55.892502       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 08:47:55.892563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-894836-m04"
	I1029 08:47:55.893176       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:47:55.894883       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 08:47:55.898545       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:47:55.898596       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:47:55.901253       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:47:55.901667       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 08:47:55.905025       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 08:47:55.905294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:47:55.917000       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:48:42.390447       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqj79\": the object has been modified; please apply your changes to the latest version and try again"
	I1029 08:48:42.392685       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7aa7c40c-2de0-444b-84d5-38273baecd29", APIVersion:"v1", ResourceVersion:"311", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqj79": the object has been modified; please apply your changes to the latest version and try again
	I1029 08:48:42.407658       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqj79\": the object has been modified; please apply your changes to the latest version and try again"
	I1029 08:48:42.407815       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7aa7c40c-2de0-444b-84d5-38273baecd29", APIVersion:"v1", ResourceVersion:"311", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqj79": the object has been modified; please apply your changes to the latest version and try again
	I1029 08:53:56.021427       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gmd49"
	
	
	==> kube-proxy [4ac7e4e48f2d67e6c26eb63b7aff7bf2e7c9e3065e9d277bfed197195815f419] <==
	I1029 08:48:00.832054       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:48:01.014142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:48:01.114385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:48:01.114528       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:48:01.114683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:48:01.305529       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:48:01.305578       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:48:01.412541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:48:01.412931       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:48:01.413206       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:48:01.414509       1 config.go:200] "Starting service config controller"
	I1029 08:48:01.414592       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:48:01.414674       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:48:01.414708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:48:01.414746       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:48:01.414771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:48:01.437651       1 config.go:309] "Starting node config controller"
	I1029 08:48:01.437795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:48:01.437892       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:48:01.521251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:48:01.521390       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:48:01.521472       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8] <==
	I1029 08:47:52.800319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 08:47:52.800365       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:47:52.815921       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 08:47:52.816162       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:47:52.829112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1029 08:47:52.834749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1029 08:47:52.816196       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 08:47:52.892990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:47:52.893149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:47:52.893207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:47:52.893255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:47:52.893310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:47:52.893364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:47:52.893406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:47:52.893454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:47:52.893501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:47:52.893542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:47:52.893586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:47:52.893632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:47:52.893673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:47:52.893723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:47:52.893786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:47:52.893831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:47:52.893871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1029 08:47:52.934773       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:47:58 ha-894836 kubelet[799]: E1029 08:47:58.553127     799 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-894836\" already exists" pod="kube-system/etcd-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.553347     799 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: E1029 08:47:58.581653     799 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-894836\" already exists" pod="kube-system/kube-apiserver-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.877519     799 apiserver.go:52] "Watching apiserver"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.897570     799 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-894836" podUID="3304e5b5-10a5-4362-855f-966f12e19513"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.022914     799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.027611     799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984de04af66c2e9a91b240b1eee4ab93" path="/var/lib/kubelet/pods/984de04af66c2e9a91b240b1eee4ab93/volumes"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.057848     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-cni-cfg\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071318     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-lib-modules\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071556     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80-lib-modules\") pod \"kube-proxy-gxrz7\" (UID: \"b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80\") " pod="kube-system/kube-proxy-gxrz7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071666     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-xtables-lock\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.074936     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80-xtables-lock\") pod \"kube-proxy-gxrz7\" (UID: \"b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80\") " pod="kube-system/kube-proxy-gxrz7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.075062     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/74a003fb-b5cc-4ffa-8560-fd41d1257bd6-tmp\") pod \"storage-provisioner\" (UID: \"74a003fb-b5cc-4ffa-8560-fd41d1257bd6\") " pod="kube-system/storage-provisioner"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.085145     799 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-894836"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.085320     799 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-894836"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.188071     799 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.294117     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af WatchSource:0}: Error finding container fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af: Status 404 returned error can't find the container with id fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.580481     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0 WatchSource:0}: Error finding container 662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0: Status 404 returned error can't find the container with id 662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.659441     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5 WatchSource:0}: Error finding container c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5: Status 404 returned error can't find the container with id c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.686006     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db WatchSource:0}: Error finding container 97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db: Status 404 returned error can't find the container with id 97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.830139     799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-894836" podStartSLOduration=0.830111939 podStartE2EDuration="830.111939ms" podCreationTimestamp="2025-10-29 08:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 08:47:59.71182927 +0000 UTC m=+30.969423145" watchObservedRunningTime="2025-10-29 08:47:59.830111939 +0000 UTC m=+31.087705806"
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.917098     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68 WatchSource:0}: Error finding container 541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68: Status 404 returned error can't find the container with id 541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68
	Oct 29 08:48:28 ha-894836 kubelet[799]: E1029 08:48:28.877765     799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b\": container with ID starting with 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b not found: ID does not exist" containerID="00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b"
	Oct 29 08:48:28 ha-894836 kubelet[799]: I1029 08:48:28.877826     799 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b" err="rpc error: code = NotFound desc = could not find container \"00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b\": container with ID starting with 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b not found: ID does not exist"
	Oct 29 08:48:31 ha-894836 kubelet[799]: I1029 08:48:31.401594     799 scope.go:117] "RemoveContainer" containerID="69e1be8c137eda9847c41a23a137e76dd93f5a10225b59b8180411d6cb08e5d4"
	

                                                
                                                
-- /stdout --
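The kubelet lines above are expected churn during the node restart: the cAdvisor watch handler and the container garbage collector both ask about container IDs that CRI-O has already removed, so the 404/NotFound answers are harmless. A minimal sketch of confirming that an ID is really gone rather than hitting a runtime fault, assuming crictl is present in the node image (it normally is when the crio runtime is in use); the ID is the one from the log:

  # query CRI-O directly for the container the kubelet could not find
  out/minikube-linux-arm64 -p ha-894836 ssh "sudo crictl ps -a --id 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b"
  # an empty listing matches the NotFound answers in the kubelet log above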
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-894836 -n ha-894836
helpers_test.go:269: (dbg) Run:  kubectl --context ha-894836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-wpcg6
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-894836 describe pod busybox-7b57f96db7-wpcg6
helpers_test.go:290: (dbg) kubectl --context ha-894836 describe pod busybox-7b57f96db7-wpcg6:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-wpcg6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9tsv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-m9tsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
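The two FailedScheduling events explain why busybox-7b57f96db7-wpcg6 stays Pending: after the restart two of the four nodes still carry the node.kubernetes.io/unreachable taint, and the other two already run a replica of the same ReplicaSet, so the deployment's pod anti-affinity leaves no schedulable node. A minimal sketch of checking both conditions from the same kubectl context; the jsonpath assumes the anti-affinity sits in the busybox Deployment's pod template, as the scheduler message implies:

  # which nodes are still tainted as unreachable
  kubectl --context ha-894836 get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints[*].key
  # the anti-affinity rule the scheduler is enforcing
  kubectl --context ha-894836 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'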
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (529.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 node delete m03 --alsologtostderr -v 5: (5.908788442s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5: exit status 7 (636.959619ms)

                                                
                                                
-- stdout --
	ha-894836
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-894836-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-894836-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:55:50.084612   57764 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:55:50.084837   57764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:55:50.084866   57764 out.go:374] Setting ErrFile to fd 2...
	I1029 08:55:50.084884   57764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:55:50.085210   57764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:55:50.085450   57764 out.go:368] Setting JSON to false
	I1029 08:55:50.085513   57764 mustload.go:66] Loading cluster: ha-894836
	I1029 08:55:50.085572   57764 notify.go:221] Checking for updates...
	I1029 08:55:50.086074   57764 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:55:50.086109   57764 status.go:174] checking status of ha-894836 ...
	I1029 08:55:50.086714   57764 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:55:50.137400   57764 status.go:371] ha-894836 host status = "Running" (err=<nil>)
	I1029 08:55:50.137420   57764 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:55:50.137686   57764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:55:50.160649   57764 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:55:50.160934   57764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:55:50.160981   57764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:55:50.181594   57764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:55:50.286125   57764 ssh_runner.go:195] Run: systemctl --version
	I1029 08:55:50.292805   57764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:55:50.306660   57764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:55:50.377913   57764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 08:55:50.35813177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:55:50.378447   57764 kubeconfig.go:125] found "ha-894836" server: "https://192.168.49.254:8443"
	I1029 08:55:50.378485   57764 api_server.go:166] Checking apiserver status ...
	I1029 08:55:50.378530   57764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:55:50.391480   57764 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/913/cgroup
	I1029 08:55:50.400623   57764 api_server.go:182] apiserver freezer: "11:freezer:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio/crio-e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe"
	I1029 08:55:50.400702   57764 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio/crio-e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe/freezer.state
	I1029 08:55:50.408892   57764 api_server.go:204] freezer state: "THAWED"
	I1029 08:55:50.408920   57764 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1029 08:55:50.417345   57764 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1029 08:55:50.417373   57764 status.go:463] ha-894836 apiserver status = Running (err=<nil>)
	I1029 08:55:50.417383   57764 status.go:176] ha-894836 status: &{Name:ha-894836 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:55:50.417400   57764 status.go:174] checking status of ha-894836-m02 ...
	I1029 08:55:50.417709   57764 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:55:50.434958   57764 status.go:371] ha-894836-m02 host status = "Running" (err=<nil>)
	I1029 08:55:50.434982   57764 host.go:66] Checking if "ha-894836-m02" exists ...
	I1029 08:55:50.435289   57764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:55:50.457821   57764 host.go:66] Checking if "ha-894836-m02" exists ...
	I1029 08:55:50.458116   57764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:55:50.458164   57764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:55:50.476957   57764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:55:50.586024   57764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:55:50.599190   57764 kubeconfig.go:125] found "ha-894836" server: "https://192.168.49.254:8443"
	I1029 08:55:50.599220   57764 api_server.go:166] Checking apiserver status ...
	I1029 08:55:50.599270   57764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:55:50.611146   57764 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	I1029 08:55:50.620526   57764 api_server.go:182] apiserver freezer: "11:freezer:/docker/7b17e0bda1750ae8f90051838c96c8f2ec707084cdcf7c7efc4aa96c78a29289/crio/crio-1bffe58587218b46c8183346249f23d200e2c850e533556ebddbb86049b7605e"
	I1029 08:55:50.620696   57764 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7b17e0bda1750ae8f90051838c96c8f2ec707084cdcf7c7efc4aa96c78a29289/crio/crio-1bffe58587218b46c8183346249f23d200e2c850e533556ebddbb86049b7605e/freezer.state
	I1029 08:55:50.628589   57764 api_server.go:204] freezer state: "THAWED"
	I1029 08:55:50.628620   57764 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1029 08:55:50.636909   57764 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1029 08:55:50.636945   57764 status.go:463] ha-894836-m02 apiserver status = Running (err=<nil>)
	I1029 08:55:50.636954   57764 status.go:176] ha-894836-m02 status: &{Name:ha-894836-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:55:50.636974   57764 status.go:174] checking status of ha-894836-m04 ...
	I1029 08:55:50.637288   57764 cli_runner.go:164] Run: docker container inspect ha-894836-m04 --format={{.State.Status}}
	I1029 08:55:50.655529   57764 status.go:371] ha-894836-m04 host status = "Stopped" (err=<nil>)
	I1029 08:55:50.655554   57764 status.go:384] host is not running, skipping remaining checks
	I1029 08:55:50.655561   57764 status.go:176] ha-894836-m04 status: &{Name:ha-894836-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5" : exit status 7
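Exit status 7 is the expected failure mode here: minikube status deliberately exits non-zero when any node of the profile is not fully up, and the trace shows ha-894836-m04 still Stopped while both control planes pass their checks. The stderr also records how each control-plane check works: locate the kube-apiserver process, read its freezer cgroup to confirm it is not paused, then hit /healthz through the HA virtual IP. A minimal sketch of repeating those probes by hand; the VIP, port and pgrep pattern are taken from the trace, and /healthz is reachable anonymously because the default RBAC exposes it to unauthenticated users:

  # is kube-apiserver running on the primary node?
  out/minikube-linux-arm64 -p ha-894836 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
  # the same health check the status command performs, through the HA VIP
  out/minikube-linux-arm64 -p ha-894836 ssh "curl -sk https://192.168.49.254:8443/healthz"

When the control plane is healthy the two commands print roughly what api_server.go logs above: a PID, then ok.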
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-894836
helpers_test.go:243: (dbg) docker inspect ha-894836:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577",
	        "Created": "2025-10-29T08:41:13.884631643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:47:21.800876334Z",
	            "FinishedAt": "2025-10-29T08:47:21.16806896Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/hostname",
	        "HostsPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/hosts",
	        "LogPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577-json.log",
	        "Name": "/ha-894836",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-894836:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-894836",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577",
	                "LowerDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-894836",
	                "Source": "/var/lib/docker/volumes/ha-894836/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-894836",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-894836",
	                "name.minikube.sigs.k8s.io": "ha-894836",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6e74e15151ebcdec78f0c531e590064d6bb05fc075b51560c345f672aa3c577",
	            "SandboxKey": "/var/run/docker/netns/f6e74e15151e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-894836": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:33:dd:d4:71:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0687088684ea4c5a5709e0ca87c1a9ca99a57d381b08036eb4f13d9a4d606eb4",
	                    "EndpointID": "8936c5bd5e09c1315f13d32a72ef61578012dcc563588dd57720a11fcdb4992e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-894836",
	                        "40404985106a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
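The inspect output shows how the kic driver publishes the node: every forwarded container port (22, 2376, 5000, 8443, 32443) is bound only on 127.0.0.1 with a dynamically assigned host port, which is why the status trace dials 127.0.0.1:32808 to reach the node over SSH and the apiserver sits behind 32811. A minimal sketch of pulling a single binding back out, using the same Go template the harness itself runs plus docker's built-in lookup:

  # host port bound to the node's SSH port (32808 in this run)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-894836
  # equivalent shorthand
  docker port ha-894836 22/tcp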
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-894836 -n ha-894836
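--format takes a Go template over the status struct, so a caller can extract one field ({{.Host}} here, {{.APIServer}} earlier in this report) instead of parsing the full text output, and --node narrows it to one machine. A minimal sketch using the field names from the status struct dumped in the trace above; the node name m04 is the one recorded in the profile config, and without --node the template is applied to each node of the profile in turn:

  # one-line status for the stopped worker; status exits non-zero when a node is not fully up, hence || true
  out/minikube-linux-arm64 -p ha-894836 status --node m04 --format '{{.Name}}: {{.Host}}/{{.Kubelet}}' || true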
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 logs -n 25: (1.282998989s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m02 sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m02.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836-m04:/home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp testdata/cp-test.txt ha-894836-m04:/home/docker/cp-test.txt                                                             │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836-m04.txt │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836:/home/docker/cp-test_ha-894836-m04_ha-894836.txt                       │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836.txt                                                 │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m02 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m03:/home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node start m02 --alsologtostderr -v 5                                                                                      │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node list --alsologtostderr -v 5                                                                                           │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │                     │
	│ stop    │ ha-894836 stop --alsologtostderr -v 5                                                                                                │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:47 UTC │
	│ start   │ ha-894836 start --wait true --alsologtostderr -v 5                                                                                   │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:47 UTC │                     │
	│ node    │ ha-894836 node list --alsologtostderr -v 5                                                                                           │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:55 UTC │                     │
	│ node    │ ha-894836 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:55 UTC │ 29 Oct 25 08:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:47:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:47:21.529499   51643 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:47:21.529606   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529621   51643 out.go:374] Setting ErrFile to fd 2...
	I1029 08:47:21.529626   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529872   51643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:47:21.530226   51643 out.go:368] Setting JSON to false
	I1029 08:47:21.531000   51643 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1793,"bootTime":1761725848,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:47:21.531062   51643 start.go:143] virtualization:  
	I1029 08:47:21.534496   51643 out.go:179] * [ha-894836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:47:21.538440   51643 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:47:21.538583   51643 notify.go:221] Checking for updates...
	I1029 08:47:21.544526   51643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:47:21.547326   51643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:21.550152   51643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:47:21.553042   51643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:47:21.555854   51643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:47:21.559195   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:21.559391   51643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:47:21.590221   51643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:47:21.590337   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.646530   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.636887182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.646636   51643 docker.go:319] overlay module found
	I1029 08:47:21.651571   51643 out.go:179] * Using the docker driver based on existing profile
	I1029 08:47:21.654406   51643 start.go:309] selected driver: docker
	I1029 08:47:21.654426   51643 start.go:930] validating driver "docker" against &{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.654576   51643 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:47:21.654673   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.713521   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.703756989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.713963   51643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:21.713998   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:21.714048   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:21.714093   51643 start.go:353] cluster config:
	{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.719068   51643 out.go:179] * Starting "ha-894836" primary control-plane node in "ha-894836" cluster
	I1029 08:47:21.721819   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:21.724835   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:21.727599   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:21.727626   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:21.727647   51643 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:47:21.727666   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:21.727743   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:21.727753   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:21.727909   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:21.745168   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:21.745191   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:21.745207   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:21.745229   51643 start.go:360] acquireMachinesLock for ha-894836: {Name:mk81ec6bdb62bf512bc2903a97ef9ba531fecfa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:21.745296   51643 start.go:364] duration metric: took 49.552µs to acquireMachinesLock for "ha-894836"
	I1029 08:47:21.745320   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:21.745329   51643 fix.go:54] fixHost starting: 
	I1029 08:47:21.745587   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:21.762859   51643 fix.go:112] recreateIfNeeded on ha-894836: state=Stopped err=<nil>
	W1029 08:47:21.762919   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:21.766255   51643 out.go:252] * Restarting existing docker container for "ha-894836" ...
	I1029 08:47:21.766345   51643 cli_runner.go:164] Run: docker start ha-894836
	I1029 08:47:22.012669   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:22.033117   51643 kic.go:430] container "ha-894836" state is running.
	I1029 08:47:22.033526   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:22.057333   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:22.057589   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:22.057651   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:22.080561   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:22.080896   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:22.080906   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:22.081644   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:47:25.232635   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.232719   51643 ubuntu.go:182] provisioning hostname "ha-894836"
	I1029 08:47:25.232811   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.251060   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.251387   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.251404   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836 && echo "ha-894836" | sudo tee /etc/hostname
	I1029 08:47:25.413694   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.413779   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.431658   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.431987   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.432010   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:25.580597   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:25.580622   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:25.580654   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:25.580671   51643 provision.go:84] configureAuth start
	I1029 08:47:25.580734   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:25.598256   51643 provision.go:143] copyHostCerts
	I1029 08:47:25.598293   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598330   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:25.598336   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598412   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:25.598503   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598519   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:25.598523   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598549   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:25.598597   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598618   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:25.598622   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598646   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:25.598700   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836 san=[127.0.0.1 192.168.49.2 ha-894836 localhost minikube]
	I1029 08:47:26.140516   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:26.140603   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:26.140697   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.157969   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.259769   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:26.259831   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:26.276774   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:26.276833   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:26.294325   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:26.294387   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1029 08:47:26.312588   51643 provision.go:87] duration metric: took 731.894787ms to configureAuth
	I1029 08:47:26.312652   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:26.312914   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:26.313019   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.330542   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:26.330847   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:26.330868   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:26.749842   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:26.749867   51643 machine.go:97] duration metric: took 4.692267534s to provisionDockerMachine
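The SSH command above drops a CRIO_MINIKUBE_OPTIONS sysconfig fragment and restarts CRI-O so the service CIDR (10.96.0.0/12) is treated as an insecure registry range. A minimal spot-check of that drop-in on the node, sketched with the profile name taken from this log (not part of the test run itself):

  $ minikube -p ha-894836 ssh -- cat /etc/sysconfig/crio.minikube
  $ minikube -p ha-894836 ssh -- sudo systemctl is-active crio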
	I1029 08:47:26.749878   51643 start.go:293] postStartSetup for "ha-894836" (driver="docker")
	I1029 08:47:26.749923   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:26.750004   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:26.750092   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.771117   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.878934   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:26.882605   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:26.882634   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:26.882646   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:26.882718   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:26.882831   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:26.882843   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:26.882991   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:26.891148   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:26.909280   51643 start.go:296] duration metric: took 159.355379ms for postStartSetup
	I1029 08:47:26.909405   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:26.909466   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.925846   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.025507   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:27.030364   51643 fix.go:56] duration metric: took 5.285027579s for fixHost
	I1029 08:47:27.030393   51643 start.go:83] releasing machines lock for "ha-894836", held for 5.285083572s
	I1029 08:47:27.030473   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:27.046867   51643 ssh_runner.go:195] Run: cat /version.json
	I1029 08:47:27.046908   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:27.046925   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.046972   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.072712   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.075970   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.176083   51643 ssh_runner.go:195] Run: systemctl --version
	I1029 08:47:27.271259   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:27.306996   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:27.311297   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:27.311362   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:27.318983   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:27.319008   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:27.319038   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:27.319083   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:27.334445   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:27.347545   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:27.347636   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:27.363332   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:27.376173   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:27.492370   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:27.612596   51643 docker.go:234] disabling docker service ...
	I1029 08:47:27.612724   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:27.628742   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:27.643114   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:27.769923   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:27.894105   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:27.906720   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:27.921611   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:27.921734   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.930389   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:27.930505   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.939285   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.947870   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.956623   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:27.965519   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.974392   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.982657   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.991382   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:27.999251   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:28.008477   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.138673   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
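The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start sysctl) before crio is restarted. A hedged way to confirm the resulting keys, reusing the exact file path from the log:

  $ minikube -p ha-894836 ssh -- grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf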
	I1029 08:47:28.265137   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:28.265257   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:28.269363   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:28.269468   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:28.273391   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:28.298305   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:28.298482   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.332193   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.363359   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:28.366252   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:28.382546   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:28.386569   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:28.396854   51643 kubeadm.go:884] updating cluster {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:47:28.397006   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:28.397068   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.434678   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.434703   51643 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:47:28.434770   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.460074   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.460096   51643 cache_images.go:86] Images are preloaded, skipping loading
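Both crictl image listings above report that everything needed for Kubernetes v1.34.1 on cri-o is already present, so the preload tarball is not re-extracted. To see the same image set interactively, a sketch:

  $ minikube -p ha-894836 ssh -- sudo crictl images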
	I1029 08:47:28.460105   51643 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:47:28.460221   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
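The kubelet unit fragment above is what later gets written out as /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in (see the scp lines further down). To inspect the unit as systemd actually sees it on the node, a sketch:

  $ minikube -p ha-894836 ssh -- sudo systemctl cat kubelet
  $ minikube -p ha-894836 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf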
	I1029 08:47:28.460331   51643 ssh_runner.go:195] Run: crio config
	I1029 08:47:28.513402   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:28.513423   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:28.513438   51643 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:47:28.513462   51643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-894836 NodeName:ha-894836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:47:28.513598   51643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-894836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
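The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down). Recent kubeadm releases can lint such a file; this is a sketch and assumes the kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.34.1:

  $ minikube -p ha-894836 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new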
	
	I1029 08:47:28.513621   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:28.513670   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:28.525412   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:28.525541   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
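The static pod above is what keeps the HA virtual IP 192.168.49.254 bound on eth0; note that just before it the log falls back from IPVS load-balancing because the ip_vs kernel modules are not available in this container. Two hedged checks once the node is back up (the kubectl context is assumed to match the profile name, which is how the kubeconfig is repaired later in this log):

  $ minikube -p ha-894836 ssh -- ip -4 addr show dev eth0
  $ kubectl --context ha-894836 -n kube-system get pods -o wide | grep kube-vip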
	I1029 08:47:28.525629   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:28.533537   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:28.533649   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1029 08:47:28.541256   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1029 08:47:28.554128   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:28.567304   51643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1029 08:47:28.580046   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:28.592794   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:28.596388   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:28.605938   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.721205   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:28.736487   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.2
	I1029 08:47:28.736507   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:28.736536   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:28.736703   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:28.736755   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:28.736768   51643 certs.go:257] generating profile certs ...
	I1029 08:47:28.736855   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:28.736885   51643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c
	I1029 08:47:28.736902   51643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1029 08:47:29.326544   51643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c ...
	I1029 08:47:29.326575   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c: {Name:mk2c66c1b3a93815ffa793a9ebfc638bd973efe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326766   51643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c ...
	I1029 08:47:29.326783   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c: {Name:mk64676774836dc306d0667653f14bbfbbb06e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326872   51643 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt
	I1029 08:47:29.327021   51643 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key
	I1029 08:47:29.327155   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:29.327173   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:29.327190   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:29.327208   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:29.327227   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:29.327243   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:29.327257   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:29.327275   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:29.327286   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:29.327336   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:29.327368   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:29.327380   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:29.327404   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:29.327429   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:29.327455   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:29.327499   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:29.327529   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.327546   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.327560   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.328197   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:29.346024   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:29.368215   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:29.401494   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:29.429372   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:29.456963   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:29.488058   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:29.518940   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:29.566867   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:29.611519   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:29.660809   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:29.699081   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
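The apiserver serving certificate generated above carries SANs for every control-plane address plus the VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.4, 192.168.49.254). A sketch for confirming that on the node after the scp steps above:

  $ minikube -p ha-894836 ssh -- sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'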
	I1029 08:47:29.722213   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:29.732266   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:29.745012   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751640   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751710   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.814511   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:29.826133   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:29.838154   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844165   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844232   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.905999   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:29.913848   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:29.924235   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932561   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932629   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.989153   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
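The openssl x509 -hash -noout runs above compute the subject hash each CA file is indexed under, and the ln -fs commands create the matching <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's CApath-style lookup expects in /etc/ssl/certs. Reproducing one of the hashes by hand, as a sketch:

  $ minikube -p ha-894836 ssh -- openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941        (should match the b5213941.0 symlink created above)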
	I1029 08:47:29.997241   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:30.008565   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:30.100996   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:30.148023   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:30.205555   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:30.248683   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:30.291195   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
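Each openssl x509 ... -checkend 86400 run above exits 0 only when the certificate remains valid for at least another 86400 seconds (24 hours); that is how the restart path decides it can reuse the existing control-plane certificates instead of regenerating them. The same check scripted, as a sketch:

  $ minikube -p ha-894836 ssh -- 'sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo still-valid || echo expiring-soon'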
	I1029 08:47:30.333318   51643 kubeadm.go:401] StartCluster: {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:30.333452   51643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:47:30.333514   51643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:47:30.363953   51643 cri.go:89] found id: "e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe"
	I1029 08:47:30.363975   51643 cri.go:89] found id: "a917c056972ea87cbf263c90930d10cb54f7d7c4f044215f8091e6dc6ec698fe"
	I1029 08:47:30.363981   51643 cri.go:89] found id: "67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8"
	I1029 08:47:30.363984   51643 cri.go:89] found id: "ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c"
	I1029 08:47:30.363987   51643 cri.go:89] found id: "c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86"
	I1029 08:47:30.363991   51643 cri.go:89] found id: ""
	I1029 08:47:30.364037   51643 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 08:47:30.375323   51643 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:47:30Z" level=error msg="open /run/runc: no such file or directory"
	I1029 08:47:30.375401   51643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:47:30.385470   51643 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 08:47:30.385492   51643 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 08:47:30.385554   51643 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 08:47:30.394291   51643 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:30.394701   51643 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-894836" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.394803   51643 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-894836" cluster setting kubeconfig missing "ha-894836" context setting]
	I1029 08:47:30.395074   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.395601   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 08:47:30.396079   51643 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 08:47:30.396100   51643 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 08:47:30.396107   51643 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 08:47:30.396112   51643 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 08:47:30.396116   51643 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 08:47:30.396600   51643 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 08:47:30.396732   51643 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1029 08:47:30.405937   51643 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1029 08:47:30.405963   51643 kubeadm.go:602] duration metric: took 20.455594ms to restartPrimaryControlPlane
	I1029 08:47:30.405973   51643 kubeadm.go:403] duration metric: took 72.664815ms to StartCluster
	I1029 08:47:30.405988   51643 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406062   51643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.406653   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406844   51643 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:30.406872   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:30.406887   51643 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 08:47:30.407409   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.412586   51643 out.go:179] * Enabled addons: 
	I1029 08:47:30.415502   51643 addons.go:515] duration metric: took 8.615131ms for enable addons: enabled=[]
	I1029 08:47:30.415550   51643 start.go:247] waiting for cluster config update ...
	I1029 08:47:30.415564   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:30.418838   51643 out.go:203] 
	I1029 08:47:30.421986   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.422163   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.425622   51643 out.go:179] * Starting "ha-894836-m02" control-plane node in "ha-894836" cluster
	I1029 08:47:30.428500   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:30.431446   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:30.434321   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:30.434374   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:30.434516   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:30.434549   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:30.434704   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.434965   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:30.469091   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:30.469113   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:30.469126   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:30.469150   51643 start.go:360] acquireMachinesLock for ha-894836-m02: {Name:mkb930aec8192c14094c9c711c93e26847bf9202 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:30.469207   51643 start.go:364] duration metric: took 40.936µs to acquireMachinesLock for "ha-894836-m02"
	I1029 08:47:30.469228   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:30.469233   51643 fix.go:54] fixHost starting: m02
	I1029 08:47:30.469504   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.500880   51643 fix.go:112] recreateIfNeeded on ha-894836-m02: state=Stopped err=<nil>
	W1029 08:47:30.500905   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:30.506548   51643 out.go:252] * Restarting existing docker container for "ha-894836-m02" ...
	I1029 08:47:30.506637   51643 cli_runner.go:164] Run: docker start ha-894836-m02
	I1029 08:47:30.853634   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.880386   51643 kic.go:430] container "ha-894836-m02" state is running.
	I1029 08:47:30.880745   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:30.905743   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.905982   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:30.906048   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:30.933559   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:30.933904   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:30.933913   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:30.934536   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55068->127.0.0.1:32813: read: connection reset by peer
	I1029 08:47:34.203957   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.204004   51643 ubuntu.go:182] provisioning hostname "ha-894836-m02"
	I1029 08:47:34.204076   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.234369   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.234685   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.234703   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m02 && echo "ha-894836-m02" | sudo tee /etc/hostname
	I1029 08:47:34.542369   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.542516   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.574456   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.574762   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.574779   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:34.827546   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:34.827578   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:34.827603   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:34.827638   51643 provision.go:84] configureAuth start
	I1029 08:47:34.827714   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:34.862097   51643 provision.go:143] copyHostCerts
	I1029 08:47:34.862139   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862171   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:34.862183   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862258   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:34.862339   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862362   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:34.862367   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862394   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:34.862440   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862461   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:34.862469   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862496   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:34.862545   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m02 san=[127.0.0.1 192.168.49.3 ha-894836-m02 localhost minikube]
	I1029 08:47:35.182658   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:35.182745   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:35.182793   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.201881   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:35.346712   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:35.346775   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:35.384129   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:35.384198   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:47:35.415588   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:35.415653   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:35.457021   51643 provision.go:87] duration metric: took 629.369458ms to configureAuth
	I1029 08:47:35.457058   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:35.457378   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:35.457501   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.485978   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:35.486288   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:35.486309   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:35.984048   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:35.984077   51643 machine.go:97] duration metric: took 5.078076838s to provisionDockerMachine
	I1029 08:47:35.984093   51643 start.go:293] postStartSetup for "ha-894836-m02" (driver="docker")
	I1029 08:47:35.984105   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:35.984167   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:35.984212   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.009654   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.121479   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:36.125706   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:36.125737   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:36.125748   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:36.125802   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:36.125883   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:36.125902   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:36.126006   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:36.133908   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:36.152562   51643 start.go:296] duration metric: took 168.452944ms for postStartSetup
	I1029 08:47:36.152710   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:36.152752   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.170976   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.276973   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:36.287814   51643 fix.go:56] duration metric: took 5.818573756s for fixHost
	I1029 08:47:36.287841   51643 start.go:83] releasing machines lock for "ha-894836-m02", held for 5.818626179s
	I1029 08:47:36.287916   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:36.328488   51643 out.go:179] * Found network options:
	I1029 08:47:36.331520   51643 out.go:179]   - NO_PROXY=192.168.49.2
	W1029 08:47:36.337513   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:47:36.337573   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:47:36.337636   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:36.337690   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.337952   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:36.338007   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.372705   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.382161   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.725650   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:36.732748   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:36.732831   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:36.748828   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:36.748854   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:36.748899   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:36.748976   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:36.774113   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:36.799926   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:36.800009   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:36.821641   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:36.838818   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:37.085073   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:37.283501   51643 docker.go:234] disabling docker service ...
	I1029 08:47:37.283581   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:37.306704   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:37.329115   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:37.528935   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:37.724811   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:37.745385   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:37.766616   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:37.766687   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.777687   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:37.777763   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.790547   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.805597   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.824888   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:37.833592   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.847509   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.857690   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.870682   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:37.881416   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:37.893784   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:38.130979   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:47:38.346041   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:38.346156   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:38.350264   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:38.350326   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:38.353928   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:38.381039   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:38.381134   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.409799   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.443728   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:38.446621   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:47:38.449812   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:38.466711   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:38.470765   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:38.480879   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:47:38.481131   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:38.481434   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:38.498248   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:47:38.498544   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.3
	I1029 08:47:38.498558   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:38.498572   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:38.498695   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:38.498747   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:38.498755   51643 certs.go:257] generating profile certs ...
	I1029 08:47:38.498831   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:38.498903   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.d4a7ec17
	I1029 08:47:38.498943   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:38.498962   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:38.498975   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:38.498991   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:38.499002   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:38.499012   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:38.499039   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:38.499054   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:38.499064   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:38.499118   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:38.499148   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:38.499158   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:38.499189   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:38.499215   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:38.499239   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:38.499284   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:38.499315   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:38.499335   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:38.499349   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:38.499410   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:38.516805   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:38.612647   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:47:38.616561   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:47:38.624748   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:47:38.628258   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:47:38.637180   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:47:38.640891   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:47:38.650214   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:47:38.653972   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:47:38.662619   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:47:38.666317   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:47:38.674366   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:47:38.678199   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:47:38.686306   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:38.706856   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:38.724221   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:38.741317   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:38.759079   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:38.777104   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:38.794767   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:38.812149   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:38.830280   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:38.849527   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:38.870347   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:38.890190   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:47:38.904271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:47:38.917479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:47:38.930520   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:47:38.945717   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:47:38.959276   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:47:38.972479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:47:38.985067   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:38.991454   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:38.999996   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004703   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004780   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.050207   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:39.058997   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:39.067821   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071762   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071826   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.113725   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:39.121907   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:39.130312   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134430   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134513   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.176116   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:47:39.184143   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:39.188071   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:39.229804   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:39.271125   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:39.314420   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:39.358357   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:39.404199   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 08:47:39.450657   51643 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1029 08:47:39.450775   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:47:39.450808   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:39.450861   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:39.462795   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:39.462879   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1029 08:47:39.462977   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:39.471222   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:39.471296   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:47:39.480280   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:47:39.493347   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:39.506856   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:39.521570   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:39.525461   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:39.536266   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.680061   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.694883   51643 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:39.695320   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:39.699488   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:47:39.702679   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.837549   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.854606   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:47:39.854679   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:47:39.854929   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m02" to be "Ready" ...
	W1029 08:47:49.857769   51643 node_ready.go:55] error getting node "ha-894836-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-894836-m02": net/http: TLS handshake timeout
	I1029 08:47:52.860254   51643 node_ready.go:49] node "ha-894836-m02" is "Ready"
	I1029 08:47:52.860290   51643 node_ready.go:38] duration metric: took 13.005340499s for node "ha-894836-m02" to be "Ready" ...
	I1029 08:47:52.860304   51643 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:47:52.860384   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.361211   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.860507   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.360916   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.860446   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.361159   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.860486   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.361306   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.860828   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.360541   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.860525   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.361238   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.374939   51643 api_server.go:72] duration metric: took 18.680010468s to wait for apiserver process to appear ...
	I1029 08:47:58.374971   51643 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:47:58.374992   51643 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:47:58.386476   51643 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:47:58.388170   51643 api_server.go:141] control plane version: v1.34.1
	I1029 08:47:58.388195   51643 api_server.go:131] duration metric: took 13.217297ms to wait for apiserver health ...
	I1029 08:47:58.388204   51643 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:47:58.397073   51643 system_pods.go:59] 26 kube-system pods found
	I1029 08:47:58.397155   51643 system_pods.go:61] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.397179   51643 system_pods.go:61] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.397217   51643 system_pods.go:61] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.397245   51643 system_pods.go:61] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.397271   51643 system_pods.go:61] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.397328   51643 system_pods.go:61] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.397356   51643 system_pods.go:61] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.397405   51643 system_pods.go:61] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.397432   51643 system_pods.go:61] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.397457   51643 system_pods.go:61] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.397494   51643 system_pods.go:61] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.397520   51643 system_pods.go:61] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.397554   51643 system_pods.go:61] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.397597   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.397620   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.397668   51643 system_pods.go:61] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.397697   51643 system_pods.go:61] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.397724   51643 system_pods.go:61] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.397756   51643 system_pods.go:61] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.397780   51643 system_pods.go:61] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.397802   51643 system_pods.go:61] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.397842   51643 system_pods.go:61] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.397867   51643 system_pods.go:61] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.397978   51643 system_pods.go:61] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.398003   51643 system_pods.go:61] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.398030   51643 system_pods.go:61] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.398069   51643 system_pods.go:74] duration metric: took 9.856974ms to wait for pod list to return data ...
	I1029 08:47:58.398098   51643 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:47:58.402325   51643 default_sa.go:45] found service account: "default"
	I1029 08:47:58.402401   51643 default_sa.go:55] duration metric: took 4.283713ms for default service account to be created ...
	I1029 08:47:58.402426   51643 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:47:58.411486   51643 system_pods.go:86] 26 kube-system pods found
	I1029 08:47:58.411568   51643 system_pods.go:89] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.411592   51643 system_pods.go:89] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.411631   51643 system_pods.go:89] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.411661   51643 system_pods.go:89] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.411686   51643 system_pods.go:89] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.411725   51643 system_pods.go:89] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.411755   51643 system_pods.go:89] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.411785   51643 system_pods.go:89] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.411826   51643 system_pods.go:89] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.411849   51643 system_pods.go:89] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.411887   51643 system_pods.go:89] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.411913   51643 system_pods.go:89] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.411942   51643 system_pods.go:89] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.411982   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.412004   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.412046   51643 system_pods.go:89] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.412074   51643 system_pods.go:89] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.412099   51643 system_pods.go:89] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.412131   51643 system_pods.go:89] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.412157   51643 system_pods.go:89] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.412180   51643 system_pods.go:89] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.412217   51643 system_pods.go:89] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.412244   51643 system_pods.go:89] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.412269   51643 system_pods.go:89] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.412360   51643 system_pods.go:89] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.412396   51643 system_pods.go:89] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.412419   51643 system_pods.go:126] duration metric: took 9.970092ms to wait for k8s-apps to be running ...
	I1029 08:47:58.412443   51643 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:47:58.412532   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:47:58.430648   51643 system_svc.go:56] duration metric: took 18.183914ms WaitForService to wait for kubelet
	I1029 08:47:58.430727   51643 kubeadm.go:587] duration metric: took 18.735792001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:58.430763   51643 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:47:58.435505   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435585   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435615   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435636   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435667   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435691   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435709   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435750   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435776   51643 node_conditions.go:105] duration metric: took 4.978006ms to run NodePressure ...
	I1029 08:47:58.435804   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:58.435853   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:58.439739   51643 out.go:203] 
	I1029 08:47:58.443690   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:58.443882   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.447597   51643 out.go:179] * Starting "ha-894836-m03" control-plane node in "ha-894836" cluster
	I1029 08:47:58.451296   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:58.454468   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:58.457455   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:58.457578   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:58.457532   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:58.457963   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:58.457997   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:58.458193   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.484925   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:58.484945   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:58.484957   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:58.484981   51643 start.go:360] acquireMachinesLock for ha-894836-m03: {Name:mkff6279e1eccd0127b32c0d6857db9b3fa3dac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:58.485031   51643 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-894836-m03"
	I1029 08:47:58.485050   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:58.485055   51643 fix.go:54] fixHost starting: m03
	I1029 08:47:58.485336   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.517723   51643 fix.go:112] recreateIfNeeded on ha-894836-m03: state=Stopped err=<nil>
	W1029 08:47:58.517747   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:58.521056   51643 out.go:252] * Restarting existing docker container for "ha-894836-m03" ...
	I1029 08:47:58.521146   51643 cli_runner.go:164] Run: docker start ha-894836-m03
	I1029 08:47:58.923330   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.955597   51643 kic.go:430] container "ha-894836-m03" state is running.
	I1029 08:47:58.955975   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:47:58.985436   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.985727   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:58.985800   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:47:59.021071   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:59.021382   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:47:59.021392   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:59.022242   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:48:02.369899   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.369983   51643 ubuntu.go:182] provisioning hostname "ha-894836-m03"
	I1029 08:48:02.370089   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.396111   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.396431   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.396444   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m03 && echo "ha-894836-m03" | sudo tee /etc/hostname
	I1029 08:48:02.706986   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.707060   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.732902   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.733206   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.733231   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:48:03.018167   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:48:03.018188   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:48:03.018211   51643 ubuntu.go:190] setting up certificates
	I1029 08:48:03.018221   51643 provision.go:84] configureAuth start
	I1029 08:48:03.018284   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:03.051408   51643 provision.go:143] copyHostCerts
	I1029 08:48:03.051450   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051486   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:48:03.051493   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051568   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:48:03.051644   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051661   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:48:03.051666   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051690   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:48:03.051728   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051744   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:48:03.051748   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051770   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:48:03.051815   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m03 san=[127.0.0.1 192.168.49.4 ha-894836-m03 localhost minikube]
	I1029 08:48:04.283916   51643 provision.go:177] copyRemoteCerts
	I1029 08:48:04.283985   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:48:04.284031   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.301428   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:04.461287   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:48:04.461367   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:48:04.496816   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:48:04.496881   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:48:04.527177   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:48:04.527250   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:48:04.556555   51643 provision.go:87] duration metric: took 1.5383197s to configureAuth
	I1029 08:48:04.556585   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:48:04.556817   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:48:04.556919   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.581700   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:04.581999   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:04.582018   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:48:05.181543   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:48:05.181567   51643 machine.go:97] duration metric: took 6.195829937s to provisionDockerMachine
	I1029 08:48:05.181589   51643 start.go:293] postStartSetup for "ha-894836-m03" (driver="docker")
	I1029 08:48:05.181600   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:48:05.181674   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:48:05.181722   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.207592   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.322834   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:48:05.327694   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:48:05.327775   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:48:05.327808   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:48:05.327899   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:48:05.328050   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:48:05.328079   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:48:05.328256   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:48:05.343080   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:48:05.371323   51643 start.go:296] duration metric: took 189.718932ms for postStartSetup
	I1029 08:48:05.371417   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:48:05.371455   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.397947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.541458   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:48:05.561976   51643 fix.go:56] duration metric: took 7.076913817s for fixHost
	I1029 08:48:05.562004   51643 start.go:83] releasing machines lock for "ha-894836-m03", held for 7.076964665s
	I1029 08:48:05.562072   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:05.600883   51643 out.go:179] * Found network options:
	I1029 08:48:05.604417   51643 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1029 08:48:05.607757   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607793   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607816   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607826   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:48:05.607887   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:48:05.607928   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.607983   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:48:05.608041   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.654947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.658008   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:06.130162   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:48:06.143305   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:48:06.143421   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:48:06.167460   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:48:06.167489   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:48:06.167523   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:48:06.167572   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:48:06.213970   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:48:06.251029   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:48:06.251087   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:48:06.290080   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:48:06.327709   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:48:06.726326   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:48:07.139091   51643 docker.go:234] disabling docker service ...
	I1029 08:48:07.139182   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:48:07.178202   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:48:07.209433   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:48:07.608392   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:48:08.086947   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:48:08.121769   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:48:08.184236   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:48:08.184326   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.215828   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:48:08.215914   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.238638   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.269033   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.295262   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:48:08.331399   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.356819   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.389668   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
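Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (reconstructed from the commands shown, not read back from the node; line order may differ):

$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",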
	I1029 08:48:08.403860   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:48:08.423244   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:48:08.437579   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:48:08.832580   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:49:39.275381   51643 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.442758035s)
	I1029 08:49:39.275412   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:49:39.275483   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:49:39.279771   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:49:39.279855   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:49:39.284759   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:49:39.334853   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:49:39.334984   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.371804   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.405984   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:49:39.412429   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:49:39.415504   51643 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1029 08:49:39.418469   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:49:39.435673   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:49:39.440794   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:39.451208   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:49:39.451471   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:39.451781   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:49:39.468915   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:49:39.469188   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.4
	I1029 08:49:39.469202   51643 certs.go:195] generating shared ca certs ...
	I1029 08:49:39.469216   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:49:39.469334   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:49:39.469401   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:49:39.469413   51643 certs.go:257] generating profile certs ...
	I1029 08:49:39.469489   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:49:39.469559   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.761eb988
	I1029 08:49:39.469601   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:49:39.469613   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:49:39.469625   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:49:39.469641   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:49:39.469654   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:49:39.469666   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:49:39.469679   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:49:39.469694   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:49:39.469705   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:49:39.469761   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:49:39.469793   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:49:39.469805   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:49:39.469829   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:49:39.469858   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:49:39.469887   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:49:39.469934   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:49:39.469964   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:49:39.469983   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:39.469994   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:49:39.470057   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:49:39.488996   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:49:39.588688   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:49:39.592443   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:49:39.600773   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:49:39.604466   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:49:39.613528   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:49:39.617112   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:49:39.625577   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:49:39.629278   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:49:39.637493   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:49:39.641121   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:49:39.650070   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:49:39.653954   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:49:39.662931   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:49:39.685107   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:49:39.705459   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:49:39.724858   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:49:39.743556   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:49:39.762456   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:49:39.781042   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:49:39.803894   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:49:39.827899   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:49:39.848693   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:49:39.875006   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:49:39.895980   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:49:39.909585   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:49:39.922536   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:49:39.935718   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:49:39.950308   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:49:39.965160   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:49:39.979271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:49:39.992671   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:49:39.999106   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:49:40.009754   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016736   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016877   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.067934   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:49:40.077186   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:49:40.086864   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091154   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091257   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.134215   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:49:40.142049   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:49:40.150815   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154732   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154796   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.196358   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
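The hash-and-symlink sequence above builds the standard OpenSSL trust layout: each CA is hashed with `openssl x509 -hash` and exposed under /etc/ssl/certs/<hash>.0. For the minikube CA this should resolve as follows (expected result sketched from the commands above, not captured output):

$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
b5213941
$ ls -l /etc/ssl/certs/b5213941.0    # symlink created by the ln -fs above
/etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem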
	I1029 08:49:40.204753   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:49:40.208825   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:49:40.251130   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:49:40.293659   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:49:40.335303   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:49:40.378403   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:49:40.419111   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
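Each `-checkend 86400` call asks openssl whether the certificate expires within the next 24 hours; it exits 0 only if the cert stays valid for at least that long, so a silent pass here means the control-plane certs do not need regeneration. The same check can be run by hand, for example:

$ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
    && echo "valid for >= 24h" || echo "expires within 24h"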
	I1029 08:49:40.459947   51643 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1029 08:49:40.460045   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
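The kubelet unit and flags printed above are materialized further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service (see the scp lines below). To confirm on ha-894836-m03 which ExecStart systemd actually resolved, a verification sketch (not part of the test run):

$ sudo systemctl cat kubelet | grep -A1 '^ExecStart=/var'
$ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf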
	I1029 08:49:40.460074   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:49:40.460122   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:49:40.472263   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:49:40.472402   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
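Since the ip_vs modules are unavailable (see the lsmod check above), kube-vip falls back to plain ARP announcement of the 192.168.49.254 VIP on eth0 (vip_arp=true) without control-plane load-balancing. The generated manifest is copied below to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs it as a static pod; on the node it can be inspected with (sketch only):

$ sudo cat /etc/kubernetes/manifests/kube-vip.yaml | head
$ sudo crictl ps --name kube-vip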
	I1029 08:49:40.472491   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:49:40.482442   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:49:40.482527   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:49:40.491244   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:49:40.509334   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:49:40.522741   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:49:40.543511   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:49:40.549027   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:40.559626   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.700906   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.716131   51643 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:49:40.716494   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:40.720440   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:49:40.723093   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.849270   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.870801   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:49:40.870875   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:49:40.871137   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m03" to be "Ready" ...
	W1029 08:49:42.878542   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:45.376167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:47.875546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:49.879197   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:52.374859   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:54.874674   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:56.875642   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:59.385971   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:01.874925   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:04.375281   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:06.875417   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:08.877527   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:11.374735   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:13.374773   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:15.875423   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:18.374307   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:20.375009   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:22.875458   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:24.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:27.374436   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:29.375591   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:31.875678   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:33.876408   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:36.375279   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:38.875405   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:40.875687   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:43.375139   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:45.376751   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:47.874681   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:50.375198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:52.874746   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:54.875461   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:57.374875   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:59.375081   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:01.874956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:03.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:05.875856   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:07.875956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:10.374910   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:12.375300   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:14.874455   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:16.874501   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:18.881741   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:21.374575   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:23.375182   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:25.875630   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:28.375397   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:30.376726   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:32.874952   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:35.375371   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:37.875672   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:40.374584   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:42.375166   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:44.375299   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:46.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:48.876305   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:51.375111   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:53.375554   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:55.874828   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:58.374446   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:00.391777   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:02.875635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:05.374696   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:07.875548   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:10.374764   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:12.375076   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:14.874580   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:16.875240   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:18.880605   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:21.375072   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:23.875108   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:26.375196   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:28.375284   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:30.875177   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:32.875570   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:35.374573   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:37.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:39.375982   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:41.875595   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:44.377104   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:46.875402   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:48.877198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:51.375357   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:53.874734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:55.875011   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:57.875521   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:00.380590   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:02.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:05.375714   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:07.875383   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:10.374415   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:12.376491   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:14.875713   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:17.375204   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:19.377537   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:21.877439   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:24.375155   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:26.874635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:28.881623   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:31.374848   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:33.374930   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:35.875771   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:38.375835   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:40.875765   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:43.375167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:45.874879   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:47.878546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:50.375661   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:52.875435   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:55.375646   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:57.874489   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:59.875624   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:02.375174   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:04.874940   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:07.375497   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:09.875063   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:11.875223   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:13.875266   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:16.378660   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:18.883945   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:21.374606   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:23.376495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:25.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:28.375564   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:30.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:33.375292   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:35.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:38.375495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:40.874844   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:42.874893   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:45.376206   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:47.875511   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:50.375400   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:52.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:55.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:57.374957   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:59.375343   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:01.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:04.374336   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:06.374603   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:08.875609   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:11.375178   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:13.375447   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:15.376425   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:17.874841   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:20.375318   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:22.874543   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:25.375289   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:27.874901   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:30.374710   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:32.375028   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:34.375632   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:36.875017   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:38.877472   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	I1029 08:55:40.871415   51643 node_ready.go:38] duration metric: took 6m0.000252794s for node "ha-894836-m03" to be "Ready" ...
	I1029 08:55:40.874909   51643 out.go:203] 
	W1029 08:55:40.877827   51643 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1029 08:55:40.877849   51643 out.go:285] * 
	W1029 08:55:40.880012   51643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:55:40.882934   51643 out.go:203] 
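The run aborts at GUEST_START because ha-894836-m03 never moved past Ready:Unknown inside the 6m0s wait budget. When reproducing, the node condition and the kubelet on the affected node are the obvious first checks (diagnostic sketch, not part of the test run; names taken from the log above):

$ kubectl get node ha-894836-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
$ minikube -p ha-894836 ssh -n ha-894836-m03 -- sudo journalctl -u kubelet --no-pager -n 50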
	
	
	==> CRI-O <==
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.405473293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=32150040-13c5-4993-9d53-1d8c8b936dae name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.406558171Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f533fe3b-c6cb-4daf-8190-4ca198dc0664 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.406654286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.411556435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.412037753Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5eff9b5708aaba3e35120e5c17dfcd8d88e7135226bba9538b85d1bdd299f814/merged/etc/passwd: no such file or directory"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.41219942Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5eff9b5708aaba3e35120e5c17dfcd8d88e7135226bba9538b85d1bdd299f814/merged/etc/group: no such file or directory"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.412764686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.435659025Z" level=info msg="Created container 3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b: kube-system/storage-provisioner/storage-provisioner" id=f533fe3b-c6cb-4daf-8190-4ca198dc0664 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.436586896Z" level=info msg="Starting container: 3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b" id=d8955627-909b-475a-944e-ac1a3b5d4e96 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.43975017Z" level=info msg="Started container" PID=1368 containerID=3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b description=kube-system/storage-provisioner/storage-provisioner id=d8955627-909b-475a-944e-ac1a3b5d4e96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.916829212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920294611Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920371732Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920393944Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.926141566Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.926179974Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.92620596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930512623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930548259Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930572817Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934035459Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934075393Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934102561Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.937441337Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.937480057Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3d37627bfbc5f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   c6058cbca67d0       storage-provisioner                 kube-system
	7e6beb43bb335       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   9aa14b66630e2       coredns-66bc5c9577-hhhxx            kube-system
	69e1be8c137ed       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   c6058cbca67d0       storage-provisioner                 kube-system
	e7956795c58f4       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   541c10c0d9e9d       busybox-7b57f96db7-hl8ll            default
	4ac7e4e48f2d6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   97069c7ad741e       kube-proxy-gxrz7                    kube-system
	b59e1fb940c3f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   662869c52a2c8       kindnet-bjfp7                       kube-system
	f4d98e59447db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   fb9556b60baf7       coredns-66bc5c9577-vcp67            kube-system
	e00d3f78d68d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   27c7e21f538bd       kube-apiserver-ha-894836            kube-system
	a917c056972ea       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   cb582940fcc64       kube-vip-ha-894836                  kube-system
	67e5abbb69757       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   615eac85d59b6       kube-scheduler-ha-894836            kube-system
	ffcbb54d6ce44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   1                   3a2ab0bee942f       kube-controller-manager-ha-894836   kube-system
	c5012e77d5995       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   0d7cccc011f06       etcd-ha-894836                      kube-system
	
	
	==> coredns [7e6beb43bb33582fbfaddc581b0968352916d1ba99aca6791d37ebb24f48a116] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34440 - 7065 "HINFO IN 8445725135211176428.1755746847705524405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013494166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f4d98e59447db0183f40bf805b64d3d4db57ead54fe530999384509e544cc7d9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42938 - 8700 "HINFO IN 4442209450395311171.7481964028264372801. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023094613s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-894836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:41:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:55:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-894836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cd4b1ccd-742f-4f33-9ae4-c8bc3e629f16
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hl8ll             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hhhxx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-vcp67             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-894836                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-bjfp7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-894836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-894836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gxrz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-894836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-894836                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m50s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-894836 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           8m52s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   Starting                 8m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m22s (x8 over 8m23s)  kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m22s (x8 over 8m23s)  kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m22s (x8 over 8m23s)  kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m56s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           7m41s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	
	
	Name:               ha-894836-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:42:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-894836-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                80b3d6bd-ca52-4282-b4dd-9a277fb019ad
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fj895                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-894836-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-q8tvb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-894836-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-894836-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-59nqf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-894836-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-894836-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m29s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   NodeHasSufficientPID     9m24s (x8 over 9m24s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node ha-894836-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m52s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   Starting                 8m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m18s (x8 over 8m18s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m18s (x8 over 8m18s)  kubelet          Node ha-894836-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m18s (x8 over 8m18s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m56s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           7m41s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	
	
	Name:               ha-894836-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_45_06_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:45:06 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:46:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-894836-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a6b33f47-a46d-4ce9-9424-db5d023a3b7c
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hg69g       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-bprsj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-894836-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m52s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           7m56s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           7m41s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeNotReady             7m6s               node-controller  Node ha-894836-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014848] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520802] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035216] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815569] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.730396] kauditd_printk_skb: 36 callbacks suppressed
	[Oct29 08:19] kauditd_printk_skb: 8 callbacks suppressed
	[Oct29 08:21] overlayfs: idmapped layers are currently not supported
	[  +0.080642] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct29 08:26] overlayfs: idmapped layers are currently not supported
	[Oct29 08:27] overlayfs: idmapped layers are currently not supported
	[Oct29 08:41] overlayfs: idmapped layers are currently not supported
	[Oct29 08:42] overlayfs: idmapped layers are currently not supported
	[Oct29 08:43] overlayfs: idmapped layers are currently not supported
	[Oct29 08:45] overlayfs: idmapped layers are currently not supported
	[Oct29 08:46] overlayfs: idmapped layers are currently not supported
	[Oct29 08:47] overlayfs: idmapped layers are currently not supported
	[  +4.220383] overlayfs: idmapped layers are currently not supported
	[Oct29 08:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86] <==
	{"level":"warn","ts":"2025-10-29T08:55:37.786299Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:37.786354Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:40.231758Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:40.231798Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:41.787909Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:41.787976Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:45.232786Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:45.232724Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:45.481653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.501921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.509326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.520416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.534388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58672","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.4:58672: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-29T08:55:45.534768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58674","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:55:45.549450Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13886940692718237272)"}
	{"level":"info","ts":"2025-10-29T08:55:45.551570Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b0fdec051931967a","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-29T08:55:45.551618Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551652Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551684Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551702Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551735Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551763Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551793Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551805Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b0fdec051931967a"}
	{"level":"warn","ts":"2025-10-29T08:55:45.556358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58682","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:55:52 up 38 min,  0 user,  load average: 1.55, 1.64, 1.43
	Linux ha-894836 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b59e1fb940c3f6ad37293176d85dd63473e5ac8494b7819987c7064627f6d94c] <==
	I1029 08:55:20.917053       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:20.917112       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:20.917123       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:30.916442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:30.916483       1 main.go:301] handling current node
	I1029 08:55:30.916498       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:30.916503       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:30.916677       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:30.916691       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:30.916824       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:30.916838       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:40.923925       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:40.924058       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:40.924212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:40.924253       1 main.go:301] handling current node
	I1029 08:55:40.924305       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:40.924368       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:40.924514       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:40.924552       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:50.916416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:50.916453       1 main.go:301] handling current node
	I1029 08:55:50.916493       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:50.916501       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:50.916643       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:50.916658       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe] <==
	I1029 08:47:52.921966       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 08:47:52.919543       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 08:47:52.926729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 08:47:52.926973       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 08:47:52.933488       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W1029 08:47:52.938611       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1029 08:47:52.945598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 08:47:52.946057       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 08:47:52.946083       1 policy_source.go:240] refreshing policies
	I1029 08:47:52.951298       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 08:47:52.951418       1 aggregator.go:171] initial CRD sync complete...
	I1029 08:47:52.951451       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 08:47:52.951481       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 08:47:52.951508       1 cache.go:39] Caches are synced for autoregister controller
	I1029 08:47:52.977975       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 08:47:52.993034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 08:47:53.040242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 08:47:53.057186       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1029 08:47:53.065202       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1029 08:47:53.534383       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1029 08:47:53.979043       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1029 08:47:54.542753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 08:47:59.474057       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 08:47:59.516146       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 08:47:59.654898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c] <==
	I1029 08:47:55.875853       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 08:47:55.882183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 08:47:55.882278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 08:47:55.882348       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 08:47:55.883459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 08:47:55.887670       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 08:47:55.890946       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:47:55.891049       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 08:47:55.892406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 08:47:55.892502       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 08:47:55.892563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-894836-m04"
	I1029 08:47:55.893176       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:47:55.894883       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 08:47:55.898545       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:47:55.898596       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:47:55.901253       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:47:55.901667       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 08:47:55.905025       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 08:47:55.905294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:47:55.917000       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:48:42.390447       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqj79\": the object has been modified; please apply your changes to the latest version and try again"
	I1029 08:48:42.392685       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7aa7c40c-2de0-444b-84d5-38273baecd29", APIVersion:"v1", ResourceVersion:"311", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqj79": the object has been modified; please apply your changes to the latest version and try again
	I1029 08:48:42.407658       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqj79\": the object has been modified; please apply your changes to the latest version and try again"
	I1029 08:48:42.407815       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7aa7c40c-2de0-444b-84d5-38273baecd29", APIVersion:"v1", ResourceVersion:"311", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqj79": the object has been modified; please apply your changes to the latest version and try again
	I1029 08:53:56.021427       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gmd49"
	
	
	==> kube-proxy [4ac7e4e48f2d67e6c26eb63b7aff7bf2e7c9e3065e9d277bfed197195815f419] <==
	I1029 08:48:00.832054       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:48:01.014142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:48:01.114385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:48:01.114528       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:48:01.114683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:48:01.305529       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:48:01.305578       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:48:01.412541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:48:01.412931       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:48:01.413206       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:48:01.414509       1 config.go:200] "Starting service config controller"
	I1029 08:48:01.414592       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:48:01.414674       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:48:01.414708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:48:01.414746       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:48:01.414771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:48:01.437651       1 config.go:309] "Starting node config controller"
	I1029 08:48:01.437795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:48:01.437892       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:48:01.521251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:48:01.521390       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:48:01.521472       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8] <==
	I1029 08:47:52.800319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 08:47:52.800365       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:47:52.815921       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 08:47:52.816162       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:47:52.829112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1029 08:47:52.834749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1029 08:47:52.816196       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 08:47:52.892990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:47:52.893149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:47:52.893207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:47:52.893255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:47:52.893310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:47:52.893364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:47:52.893406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:47:52.893454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:47:52.893501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:47:52.893542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:47:52.893586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:47:52.893632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:47:52.893673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:47:52.893723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:47:52.893786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:47:52.893831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:47:52.893871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1029 08:47:52.934773       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:47:58 ha-894836 kubelet[799]: E1029 08:47:58.553127     799 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-894836\" already exists" pod="kube-system/etcd-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.553347     799 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: E1029 08:47:58.581653     799 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-894836\" already exists" pod="kube-system/kube-apiserver-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.877519     799 apiserver.go:52] "Watching apiserver"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.897570     799 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-894836" podUID="3304e5b5-10a5-4362-855f-966f12e19513"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.022914     799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.027611     799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984de04af66c2e9a91b240b1eee4ab93" path="/var/lib/kubelet/pods/984de04af66c2e9a91b240b1eee4ab93/volumes"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.057848     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-cni-cfg\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071318     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-lib-modules\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071556     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80-lib-modules\") pod \"kube-proxy-gxrz7\" (UID: \"b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80\") " pod="kube-system/kube-proxy-gxrz7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071666     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-xtables-lock\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.074936     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80-xtables-lock\") pod \"kube-proxy-gxrz7\" (UID: \"b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80\") " pod="kube-system/kube-proxy-gxrz7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.075062     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/74a003fb-b5cc-4ffa-8560-fd41d1257bd6-tmp\") pod \"storage-provisioner\" (UID: \"74a003fb-b5cc-4ffa-8560-fd41d1257bd6\") " pod="kube-system/storage-provisioner"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.085145     799 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-894836"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.085320     799 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-894836"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.188071     799 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.294117     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af WatchSource:0}: Error finding container fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af: Status 404 returned error can't find the container with id fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.580481     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0 WatchSource:0}: Error finding container 662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0: Status 404 returned error can't find the container with id 662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.659441     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5 WatchSource:0}: Error finding container c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5: Status 404 returned error can't find the container with id c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.686006     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db WatchSource:0}: Error finding container 97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db: Status 404 returned error can't find the container with id 97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.830139     799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-894836" podStartSLOduration=0.830111939 podStartE2EDuration="830.111939ms" podCreationTimestamp="2025-10-29 08:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 08:47:59.71182927 +0000 UTC m=+30.969423145" watchObservedRunningTime="2025-10-29 08:47:59.830111939 +0000 UTC m=+31.087705806"
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.917098     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68 WatchSource:0}: Error finding container 541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68: Status 404 returned error can't find the container with id 541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68
	Oct 29 08:48:28 ha-894836 kubelet[799]: E1029 08:48:28.877765     799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b\": container with ID starting with 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b not found: ID does not exist" containerID="00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b"
	Oct 29 08:48:28 ha-894836 kubelet[799]: I1029 08:48:28.877826     799 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b" err="rpc error: code = NotFound desc = could not find container \"00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b\": container with ID starting with 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b not found: ID does not exist"
	Oct 29 08:48:31 ha-894836 kubelet[799]: I1029 08:48:31.401594     799 scope.go:117] "RemoveContainer" containerID="69e1be8c137eda9847c41a23a137e76dd93f5a10225b59b8180411d6cb08e5d4"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-894836 -n ha-894836
helpers_test.go:269: (dbg) Run:  kubectl --context ha-894836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-wpcg6
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-894836 describe pod busybox-7b57f96db7-wpcg6
helpers_test.go:290: (dbg) kubectl --context ha-894836 describe pod busybox-7b57f96db7-wpcg6:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-wpcg6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9tsv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-m9tsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  117s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  117s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (9.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-894836" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-894836\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-894836\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-894836\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-894836
helpers_test.go:243: (dbg) docker inspect ha-894836:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577",
	        "Created": "2025-10-29T08:41:13.884631643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:47:21.800876334Z",
	            "FinishedAt": "2025-10-29T08:47:21.16806896Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/hostname",
	        "HostsPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/hosts",
	        "LogPath": "/var/lib/docker/containers/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577-json.log",
	        "Name": "/ha-894836",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-894836:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-894836",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577",
	                "LowerDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cb7d98797bde16eca0f4bec3498bd7eec3437fba9aba27a2de6d3809021a168/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-894836",
	                "Source": "/var/lib/docker/volumes/ha-894836/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-894836",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-894836",
	                "name.minikube.sigs.k8s.io": "ha-894836",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6e74e15151ebcdec78f0c531e590064d6bb05fc075b51560c345f672aa3c577",
	            "SandboxKey": "/var/run/docker/netns/f6e74e15151e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-894836": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:33:dd:d4:71:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0687088684ea4c5a5709e0ca87c1a9ca99a57d381b08036eb4f13d9a4d606eb4",
	                    "EndpointID": "8936c5bd5e09c1315f13d32a72ef61578012dcc563588dd57720a11fcdb4992e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-894836",
	                        "40404985106a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-894836 -n ha-894836
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 logs -n 25: (1.285360028s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m02 sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m02.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836-m04:/home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp testdata/cp-test.txt ha-894836-m04:/home/docker/cp-test.txt                                                             │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836-m04.txt │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836:/home/docker/cp-test_ha-894836-m04_ha-894836.txt                       │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836.txt                                                 │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m02 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ cp      │ ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m03:/home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt               │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ ssh     │ ha-894836 ssh -n ha-894836-m03 sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt                                         │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node start m02 --alsologtostderr -v 5                                                                                      │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:46 UTC │
	│ node    │ ha-894836 node list --alsologtostderr -v 5                                                                                           │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │                     │
	│ stop    │ ha-894836 stop --alsologtostderr -v 5                                                                                                │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:46 UTC │ 29 Oct 25 08:47 UTC │
	│ start   │ ha-894836 start --wait true --alsologtostderr -v 5                                                                                   │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:47 UTC │                     │
	│ node    │ ha-894836 node list --alsologtostderr -v 5                                                                                           │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:55 UTC │                     │
	│ node    │ ha-894836 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-894836 │ jenkins │ v1.37.0 │ 29 Oct 25 08:55 UTC │ 29 Oct 25 08:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:47:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:47:21.529499   51643 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:47:21.529606   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529621   51643 out.go:374] Setting ErrFile to fd 2...
	I1029 08:47:21.529626   51643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:47:21.529872   51643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:47:21.530226   51643 out.go:368] Setting JSON to false
	I1029 08:47:21.531000   51643 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1793,"bootTime":1761725848,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:47:21.531062   51643 start.go:143] virtualization:  
	I1029 08:47:21.534496   51643 out.go:179] * [ha-894836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:47:21.538440   51643 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:47:21.538583   51643 notify.go:221] Checking for updates...
	I1029 08:47:21.544526   51643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:47:21.547326   51643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:21.550152   51643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:47:21.553042   51643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:47:21.555854   51643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:47:21.559195   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:21.559391   51643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:47:21.590221   51643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:47:21.590337   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.646530   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.636887182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.646636   51643 docker.go:319] overlay module found
	I1029 08:47:21.651571   51643 out.go:179] * Using the docker driver based on existing profile
	I1029 08:47:21.654406   51643 start.go:309] selected driver: docker
	I1029 08:47:21.654426   51643 start.go:930] validating driver "docker" against &{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.654576   51643 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:47:21.654673   51643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:47:21.713521   51643 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 08:47:21.703756989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:47:21.713963   51643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:21.713998   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:21.714048   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:21.714093   51643 start.go:353] cluster config:
	{Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:21.719068   51643 out.go:179] * Starting "ha-894836" primary control-plane node in "ha-894836" cluster
	I1029 08:47:21.721819   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:21.724835   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:21.727599   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:21.727626   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:21.727647   51643 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:47:21.727666   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:21.727743   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:21.727753   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:21.727909   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:21.745168   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:21.745191   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:21.745207   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:21.745229   51643 start.go:360] acquireMachinesLock for ha-894836: {Name:mk81ec6bdb62bf512bc2903a97ef9ba531fecfa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:21.745296   51643 start.go:364] duration metric: took 49.552µs to acquireMachinesLock for "ha-894836"
	I1029 08:47:21.745320   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:21.745329   51643 fix.go:54] fixHost starting: 
	I1029 08:47:21.745587   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:21.762859   51643 fix.go:112] recreateIfNeeded on ha-894836: state=Stopped err=<nil>
	W1029 08:47:21.762919   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:21.766255   51643 out.go:252] * Restarting existing docker container for "ha-894836" ...
	I1029 08:47:21.766345   51643 cli_runner.go:164] Run: docker start ha-894836
	I1029 08:47:22.012669   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:22.033117   51643 kic.go:430] container "ha-894836" state is running.
	I1029 08:47:22.033526   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:22.057333   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:22.057589   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:22.057651   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:22.080561   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:22.080896   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:22.080906   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:22.081644   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:47:25.232635   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.232719   51643 ubuntu.go:182] provisioning hostname "ha-894836"
	I1029 08:47:25.232811   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.251060   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.251387   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.251404   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836 && echo "ha-894836" | sudo tee /etc/hostname
	I1029 08:47:25.413694   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836
	
	I1029 08:47:25.413779   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:25.431658   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:25.431987   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:25.432010   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:25.580597   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:25.580622   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:25.580654   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:25.580671   51643 provision.go:84] configureAuth start
	I1029 08:47:25.580734   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:25.598256   51643 provision.go:143] copyHostCerts
	I1029 08:47:25.598293   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598330   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:25.598336   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:25.598412   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:25.598503   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598519   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:25.598523   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:25.598549   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:25.598597   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598618   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:25.598622   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:25.598646   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:25.598700   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836 san=[127.0.0.1 192.168.49.2 ha-894836 localhost minikube]
	I1029 08:47:26.140516   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:26.140603   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:26.140697   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.157969   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.259769   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:26.259831   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:26.276774   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:26.276833   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:26.294325   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:26.294387   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1029 08:47:26.312588   51643 provision.go:87] duration metric: took 731.894787ms to configureAuth
	I1029 08:47:26.312652   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:26.312914   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:26.313019   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.330542   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:26.330847   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1029 08:47:26.330868   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:26.749842   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:26.749867   51643 machine.go:97] duration metric: took 4.692267534s to provisionDockerMachine
	I1029 08:47:26.749878   51643 start.go:293] postStartSetup for "ha-894836" (driver="docker")
	I1029 08:47:26.749923   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:26.750004   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:26.750092   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.771117   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:26.878934   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:26.882605   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:26.882634   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:26.882646   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:26.882718   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:26.882831   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:26.882843   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:26.882991   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:26.891148   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:26.909280   51643 start.go:296] duration metric: took 159.355379ms for postStartSetup
	I1029 08:47:26.909405   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:26.909466   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:26.925846   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.025507   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:47:27.030364   51643 fix.go:56] duration metric: took 5.285027579s for fixHost
	I1029 08:47:27.030393   51643 start.go:83] releasing machines lock for "ha-894836", held for 5.285083572s
	I1029 08:47:27.030473   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:47:27.046867   51643 ssh_runner.go:195] Run: cat /version.json
	I1029 08:47:27.046908   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:27.046925   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.046972   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:27.072712   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.075970   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:27.176083   51643 ssh_runner.go:195] Run: systemctl --version
	I1029 08:47:27.271259   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:27.306996   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:27.311297   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:27.311362   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:27.318983   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:27.319008   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:27.319038   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:27.319083   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:27.334445   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:27.347545   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:27.347636   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:27.363332   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:27.376173   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:27.492370   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:27.612596   51643 docker.go:234] disabling docker service ...
	I1029 08:47:27.612724   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:27.628742   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:27.643114   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:27.769923   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:27.894105   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:27.906720   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:27.921611   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:27.921734   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.930389   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:27.930505   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.939285   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.947870   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.956623   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:27.965519   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.974392   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.982657   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:27.991382   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:27.999251   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:28.008477   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.138673   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
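The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon restart. A manual sketch for confirming the drop-in and CRI-O's merged view of it, using only commands already seen in this log plus grep, could be:

    # inspect the drop-in minikube just edited
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # print the configuration CRI-O actually resolves
    sudo crio config | grep -E 'pause_image|cgroup_manager'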
	I1029 08:47:28.265137   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:28.265257   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:28.269363   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:28.269468   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:28.273391   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:28.298305   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:28.298482   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.332193   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:28.363359   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:28.366252   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:28.382546   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:28.386569   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
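The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts, appends the gateway address, and copies the result back with sudo. Verifying the outcome by hand is just:

    grep 'host.minikube.internal' /etc/hosts
    # expected entry: 192.168.49.1  host.minikube.internal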
	I1029 08:47:28.396854   51643 kubeadm.go:884] updating cluster {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:47:28.397006   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:28.397068   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.434678   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.434703   51643 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:47:28.434770   51643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:47:28.460074   51643 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:47:28.460096   51643 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:47:28.460105   51643 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:47:28.460221   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
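The [Unit]/[Service] fragment above is not the whole kubelet unit; it is rendered into a systemd drop-in (10-kubeadm.conf, copied to the node a few lines below). A manual way to see the merged unit the node actually runs, assuming the standard systemd paths used here, is:

    # print kubelet.service together with all drop-ins
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf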
	I1029 08:47:28.460331   51643 ssh_runner.go:195] Run: crio config
	I1029 08:47:28.513402   51643 cni.go:84] Creating CNI manager for ""
	I1029 08:47:28.513423   51643 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1029 08:47:28.513438   51643 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:47:28.513462   51643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-894836 NodeName:ha-894836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:47:28.513598   51643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-894836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:47:28.513621   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:28.513670   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:28.525412   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
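Because lsmod reports no ip_vs modules, minikube skips kube-vip's IPVS-based control-plane load balancing; the VIP itself is still advertised via ARP (vip_arp is "true" in the manifest generated below). On a host where IPVS is wanted, loading the modules would look roughly like this illustrative sketch, which is not part of the test flow:

    # load the IPVS core plus common schedulers, then re-check
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs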
	I1029 08:47:28.525541   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
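Once this static-pod manifest lands in /etc/kubernetes/manifests, kubelet runs kube-vip on the host network and the elected leader claims 192.168.49.254 on eth0 (the address, interface and lease name all come from the env block above). A manual check that the VIP and lease are in place, illustrative only, would be:

    # the VIP should appear on eth0 of whichever control-plane node holds the lease
    ip addr show eth0 | grep 192.168.49.254
    # the leader-election lease named in the manifest lives in kube-system
    kubectl -n kube-system get lease plndr-cp-lock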
	I1029 08:47:28.525629   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:28.533537   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:28.533649   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1029 08:47:28.541256   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1029 08:47:28.554128   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:28.567304   51643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1029 08:47:28.580046   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
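At this point the rendered kubeadm config sits on the node as /var/tmp/minikube/kubeadm.yaml.new. Newer kubeadm releases include a validate subcommand, so a manual sanity check against the bundled binary (not something minikube runs here) could be:

    # validate the generated config against the kubeadm API schema
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new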
	I1029 08:47:28.592794   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:28.596388   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:28.605938   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:28.721205   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:28.736487   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.2
	I1029 08:47:28.736507   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:28.736536   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:28.736703   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:28.736755   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:28.736768   51643 certs.go:257] generating profile certs ...
	I1029 08:47:28.736855   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:28.736885   51643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c
	I1029 08:47:28.736902   51643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1029 08:47:29.326544   51643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c ...
	I1029 08:47:29.326575   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c: {Name:mk2c66c1b3a93815ffa793a9ebfc638bd973efe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326766   51643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c ...
	I1029 08:47:29.326783   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c: {Name:mk64676774836dc306d0667653f14bbfbbb06e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:29.326872   51643 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt
	I1029 08:47:29.327021   51643 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.9555b31c -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key
	I1029 08:47:29.327155   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:29.327173   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:29.327190   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:29.327208   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:29.327227   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:29.327243   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:29.327257   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:29.327275   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:29.327286   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:29.327336   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:29.327368   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:29.327380   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:29.327404   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:29.327429   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:29.327455   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:29.327499   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:29.327529   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.327546   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.327560   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.328197   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:29.346024   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:29.368215   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:29.401494   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:29.429372   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:29.456963   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:29.488058   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:29.518940   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:29.566867   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:29.611519   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:29.660809   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:29.699081   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:47:29.722213   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:29.732266   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:29.745012   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751640   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.751710   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:29.814511   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:29.826133   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:29.838154   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844165   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.844232   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:29.905999   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:29.913848   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:29.924235   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932561   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.932629   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:29.989153   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:47:29.997241   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:30.008565   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:30.100996   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:30.148023   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:30.205555   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:30.248683   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:30.291195   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
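Each of these openssl calls asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire inside that window, which is what lets minikube reuse the existing certs instead of regenerating them. The equivalent manual check for any of the files is:

    # exit 0: valid for at least another 24h; exit 1: expires sooner
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; echo $?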
	I1029 08:47:30.333318   51643 kubeadm.go:401] StartCluster: {Name:ha-894836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:47:30.333452   51643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:47:30.333514   51643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:47:30.363953   51643 cri.go:89] found id: "e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe"
	I1029 08:47:30.363975   51643 cri.go:89] found id: "a917c056972ea87cbf263c90930d10cb54f7d7c4f044215f8091e6dc6ec698fe"
	I1029 08:47:30.363981   51643 cri.go:89] found id: "67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8"
	I1029 08:47:30.363984   51643 cri.go:89] found id: "ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c"
	I1029 08:47:30.363987   51643 cri.go:89] found id: "c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86"
	I1029 08:47:30.363991   51643 cri.go:89] found id: ""
	I1029 08:47:30.364037   51643 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 08:47:30.375323   51643 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:47:30Z" level=error msg="open /run/runc: no such file or directory"
	I1029 08:47:30.375401   51643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:47:30.385470   51643 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 08:47:30.385492   51643 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 08:47:30.385554   51643 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 08:47:30.394291   51643 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:30.394701   51643 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-894836" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.394803   51643 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-894836" cluster setting kubeconfig missing "ha-894836" context setting]
	I1029 08:47:30.395074   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
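The verify step above finds no ha-894836 entry in the Jenkins kubeconfig, so minikube rewrites the cluster and context entries before building its REST client. Confirming the repaired entries by hand against the same file (a manual check, not part of the test) could look like:

    KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig kubectl config get-contexts ha-894836
    KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig kubectl config view -o jsonpath='{.clusters[?(@.name=="ha-894836")].cluster.server}'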
	I1029 08:47:30.395601   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 08:47:30.396079   51643 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 08:47:30.396100   51643 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 08:47:30.396107   51643 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 08:47:30.396112   51643 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 08:47:30.396116   51643 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 08:47:30.396600   51643 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 08:47:30.396732   51643 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1029 08:47:30.405937   51643 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1029 08:47:30.405963   51643 kubeadm.go:602] duration metric: took 20.455594ms to restartPrimaryControlPlane
	I1029 08:47:30.405973   51643 kubeadm.go:403] duration metric: took 72.664815ms to StartCluster
	I1029 08:47:30.405988   51643 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406062   51643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:47:30.406653   51643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:30.406844   51643 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:30.406872   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:30.406887   51643 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 08:47:30.407409   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.412586   51643 out.go:179] * Enabled addons: 
	I1029 08:47:30.415502   51643 addons.go:515] duration metric: took 8.615131ms for enable addons: enabled=[]
	I1029 08:47:30.415550   51643 start.go:247] waiting for cluster config update ...
	I1029 08:47:30.415564   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:30.418838   51643 out.go:203] 
	I1029 08:47:30.421986   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:30.422163   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.425622   51643 out.go:179] * Starting "ha-894836-m02" control-plane node in "ha-894836" cluster
	I1029 08:47:30.428500   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:30.431446   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:30.434321   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:30.434374   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:30.434516   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:30.434549   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:30.434704   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.434965   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:30.469091   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:30.469113   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:30.469126   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:30.469150   51643 start.go:360] acquireMachinesLock for ha-894836-m02: {Name:mkb930aec8192c14094c9c711c93e26847bf9202 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:30.469207   51643 start.go:364] duration metric: took 40.936µs to acquireMachinesLock for "ha-894836-m02"
	I1029 08:47:30.469228   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:30.469233   51643 fix.go:54] fixHost starting: m02
	I1029 08:47:30.469504   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.500880   51643 fix.go:112] recreateIfNeeded on ha-894836-m02: state=Stopped err=<nil>
	W1029 08:47:30.500905   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:30.506548   51643 out.go:252] * Restarting existing docker container for "ha-894836-m02" ...
	I1029 08:47:30.506637   51643 cli_runner.go:164] Run: docker start ha-894836-m02
	I1029 08:47:30.853634   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:47:30.880386   51643 kic.go:430] container "ha-894836-m02" state is running.
	I1029 08:47:30.880745   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:30.905743   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:30.905982   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:30.906048   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:30.933559   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:30.933904   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:30.933913   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:30.934536   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55068->127.0.0.1:32813: read: connection reset by peer
	I1029 08:47:34.203957   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.204004   51643 ubuntu.go:182] provisioning hostname "ha-894836-m02"
	I1029 08:47:34.204076   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.234369   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.234685   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.234703   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m02 && echo "ha-894836-m02" | sudo tee /etc/hostname
	I1029 08:47:34.542369   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m02
	
	I1029 08:47:34.542516   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:34.574456   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:34.574762   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:34.574779   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:47:34.827546   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:47:34.827578   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:47:34.827603   51643 ubuntu.go:190] setting up certificates
	I1029 08:47:34.827638   51643 provision.go:84] configureAuth start
	I1029 08:47:34.827714   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:34.862097   51643 provision.go:143] copyHostCerts
	I1029 08:47:34.862139   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862171   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:47:34.862183   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:47:34.862258   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:47:34.862339   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862362   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:47:34.862367   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:47:34.862394   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:47:34.862440   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862461   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:47:34.862469   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:47:34.862496   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:47:34.862545   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m02 san=[127.0.0.1 192.168.49.3 ha-894836-m02 localhost minikube]
	I1029 08:47:35.182658   51643 provision.go:177] copyRemoteCerts
	I1029 08:47:35.182745   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:47:35.182793   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.201881   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:35.346712   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:47:35.346775   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:47:35.384129   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:47:35.384198   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:47:35.415588   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:47:35.415653   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:47:35.457021   51643 provision.go:87] duration metric: took 629.369458ms to configureAuth
	I1029 08:47:35.457058   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:47:35.457378   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:35.457501   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:35.485978   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:35.486288   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1029 08:47:35.486309   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:47:35.984048   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:47:35.984077   51643 machine.go:97] duration metric: took 5.078076838s to provisionDockerMachine
	I1029 08:47:35.984093   51643 start.go:293] postStartSetup for "ha-894836-m02" (driver="docker")
	I1029 08:47:35.984105   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:47:35.984167   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:47:35.984212   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.009654   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.121479   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:47:36.125706   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:47:36.125737   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:47:36.125748   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:47:36.125802   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:47:36.125883   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:47:36.125902   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:47:36.126006   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:47:36.133908   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:36.152562   51643 start.go:296] duration metric: took 168.452944ms for postStartSetup
	I1029 08:47:36.152710   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:47:36.152752   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.170976   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.276973   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
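The two df probes above report the used-space percentage and the free gigabytes of /var on the node: awk picks column 5 (Use%) of "df -h" and column 4 (Avail) of "df -BG" from the second output row. Run by hand they look like this (the example output values are illustrative):

    df -h /var | awk 'NR==2{print $5}'    # e.g. "12%"
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "170G"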
	I1029 08:47:36.287814   51643 fix.go:56] duration metric: took 5.818573756s for fixHost
	I1029 08:47:36.287841   51643 start.go:83] releasing machines lock for "ha-894836-m02", held for 5.818626179s
	I1029 08:47:36.287916   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m02
	I1029 08:47:36.328488   51643 out.go:179] * Found network options:
	I1029 08:47:36.331520   51643 out.go:179]   - NO_PROXY=192.168.49.2
	W1029 08:47:36.337513   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:47:36.337573   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:47:36.337636   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:47:36.337690   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.337952   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:47:36.338007   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m02
	I1029 08:47:36.372705   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.382161   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m02/id_rsa Username:docker}
	I1029 08:47:36.725650   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:47:36.732748   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:47:36.732831   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:47:36.748828   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:47:36.748854   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:47:36.748899   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:47:36.748976   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:47:36.774113   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:47:36.799926   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:47:36.800009   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:47:36.821641   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:47:36.838818   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:47:37.085073   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:47:37.283501   51643 docker.go:234] disabling docker service ...
	I1029 08:47:37.283581   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:47:37.306704   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:47:37.329115   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:47:37.528935   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:47:37.724811   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:47:37.745385   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:47:37.766616   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:47:37.766687   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.777687   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:47:37.777763   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.790547   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.805597   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.824888   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:47:37.833592   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.847509   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.857690   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:47:37.870682   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:47:37.881416   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:47:37.893784   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:38.130979   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
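Pieced together from the tee and sed commands above, the CRI-O configuration on the node should end up containing roughly the following. The TOML section headers are assumed from CRI-O's stock 02-crio.conf layout; only the keys and values themselves come from the log:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]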
	I1029 08:47:38.346041   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:47:38.346156   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:47:38.350264   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:47:38.350326   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:47:38.353928   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:47:38.381039   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:47:38.381134   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.409799   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:47:38.443728   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:47:38.446621   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:47:38.449812   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:47:38.466711   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:47:38.470765   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
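The bash one-liner above is an idempotent hosts-file update: it filters out any stale host.minikube.internal line, appends the fresh 192.168.49.1 mapping to a temp file, and sudo-copies the result back over /etc/hosts; the same pattern is reused further down for control-plane.minikube.internal. A quick check on the node would be (illustrative):

    grep 'minikube.internal' /etc/hosts
    # expected: "192.168.49.1  host.minikube.internal" (plus the 192.168.49.254 entry added later)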
	I1029 08:47:38.480879   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:47:38.481131   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:38.481434   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:47:38.498248   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:47:38.498544   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.3
	I1029 08:47:38.498558   51643 certs.go:195] generating shared ca certs ...
	I1029 08:47:38.498572   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:47:38.498695   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:47:38.498747   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:47:38.498755   51643 certs.go:257] generating profile certs ...
	I1029 08:47:38.498831   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:47:38.498903   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.d4a7ec17
	I1029 08:47:38.498943   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:47:38.498962   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:47:38.498975   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:47:38.498991   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:47:38.499002   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:47:38.499012   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:47:38.499039   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:47:38.499054   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:47:38.499064   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:47:38.499118   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:47:38.499148   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:47:38.499158   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:47:38.499189   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:47:38.499215   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:47:38.499239   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:47:38.499284   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:47:38.499315   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:47:38.499335   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:47:38.499349   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:38.499410   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:47:38.516805   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:47:38.612647   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:47:38.616561   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:47:38.624748   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:47:38.628258   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:47:38.637180   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:47:38.640891   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:47:38.650214   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:47:38.653972   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:47:38.662619   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:47:38.666317   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:47:38.674366   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:47:38.678199   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:47:38.686306   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:47:38.706856   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:47:38.724221   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:47:38.741317   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:47:38.759079   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:47:38.777104   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:47:38.794767   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:47:38.812149   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:47:38.830280   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:47:38.849527   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:47:38.870347   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:47:38.890190   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:47:38.904271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:47:38.917479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:47:38.930520   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:47:38.945717   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:47:38.959276   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:47:38.972479   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:47:38.985067   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:47:38.991454   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:47:38.999996   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004703   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.004780   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:47:39.050207   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:47:39.058997   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:47:39.067821   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071762   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.071826   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:47:39.113725   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:47:39.121907   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:47:39.130312   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134430   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.134513   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:47:39.176116   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
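The "openssl x509 -hash -noout" runs and the numeric .0 symlinks above follow OpenSSL's hashed-directory convention: TLS libraries look a CA up in /etc/ssl/certs by its subject-name hash, so each PEM gets a "<hash>.0" link. The same step done by hand, using the minikubeCA cert from the log (illustrative):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 in this run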
	I1029 08:47:39.184143   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:47:39.188071   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:47:39.229804   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:47:39.271125   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:47:39.314420   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:47:39.358357   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:47:39.404199   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
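Each "-checkend 86400" call asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means it would have expired by then. A compact version of the same sweep (the loop itself is illustrative; the cert paths come from the log):

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: ok for 24h" || echo "${c}: expiring"
    done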
	I1029 08:47:39.450657   51643 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1029 08:47:39.450775   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:47:39.450808   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:47:39.450861   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:47:39.462795   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:47:39.462879   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
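The generated kube-vip manifest above omits control-plane load-balancing because the "lsmod | grep ip_vs" probe a few lines earlier came back empty, so only the ARP-based VIP (192.168.49.254 on eth0) is configured. Reproducing that probe, and loading the module where the kernel provides it, looks like this (illustrative):

    lsmod | grep ip_vs || sudo modprobe ip_vs   # the non-zero grep exit is what triggered the fallback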
	I1029 08:47:39.462977   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:47:39.471222   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:47:39.471296   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:47:39.480280   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:47:39.493347   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:47:39.506856   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1029 08:47:39.521570   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:47:39.525461   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:47:39.536266   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.680061   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
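At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and the kube-vip static-pod manifest have all been written, so the daemon-reload plus start above is what brings the node's kubelet up with the flags shown earlier. Verifying by hand on the node (illustrative commands):

    sudo systemctl cat kubelet        # unit file plus the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet  # "active" once the start succeeds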
	I1029 08:47:39.694883   51643 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:47:39.695320   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:39.699488   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:47:39.702679   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:47:39.837549   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:47:39.854606   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:47:39.854679   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:47:39.854929   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m02" to be "Ready" ...
	W1029 08:47:49.857769   51643 node_ready.go:55] error getting node "ha-894836-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-894836-m02": net/http: TLS handshake timeout
	I1029 08:47:52.860254   51643 node_ready.go:49] node "ha-894836-m02" is "Ready"
	I1029 08:47:52.860290   51643 node_ready.go:38] duration metric: took 13.005340499s for node "ha-894836-m02" to be "Ready" ...
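The 6-minute wait above polls the node's Ready condition through the apiserver (note the client was re-pointed from the stale VIP 192.168.49.254 to 192.168.49.2); the first attempt hit a TLS handshake timeout while the apiserver was still coming up, and the node turned Ready about 13 seconds in. An equivalent one-liner against the same cluster would be (illustrative):

    kubectl wait --for=condition=Ready node/ha-894836-m02 --timeout=6m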
	I1029 08:47:52.860304   51643 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:47:52.860384   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.361211   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:53.860507   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.360916   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:54.860446   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.361159   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:55.860486   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.361306   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:56.860828   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.360541   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:57.860525   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.361238   51643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:47:58.374939   51643 api_server.go:72] duration metric: took 18.680010468s to wait for apiserver process to appear ...
	I1029 08:47:58.374971   51643 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:47:58.374992   51643 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:47:58.386476   51643 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:47:58.388170   51643 api_server.go:141] control plane version: v1.34.1
	I1029 08:47:58.388195   51643 api_server.go:131] duration metric: took 13.217297ms to wait for apiserver health ...
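The healthz probe is a plain GET against the apiserver; the 200/ok pair logged above is what a manual request returns as well (the curl flags here are illustrative; minikube uses its client-go TLS config rather than -k):

    curl -sk https://192.168.49.2:8443/healthz
    # -> ok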
	I1029 08:47:58.388204   51643 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:47:58.397073   51643 system_pods.go:59] 26 kube-system pods found
	I1029 08:47:58.397155   51643 system_pods.go:61] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.397179   51643 system_pods.go:61] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.397217   51643 system_pods.go:61] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.397245   51643 system_pods.go:61] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.397271   51643 system_pods.go:61] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.397328   51643 system_pods.go:61] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.397356   51643 system_pods.go:61] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.397405   51643 system_pods.go:61] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.397432   51643 system_pods.go:61] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.397457   51643 system_pods.go:61] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.397494   51643 system_pods.go:61] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.397520   51643 system_pods.go:61] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.397554   51643 system_pods.go:61] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.397597   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.397620   51643 system_pods.go:61] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.397668   51643 system_pods.go:61] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.397697   51643 system_pods.go:61] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.397724   51643 system_pods.go:61] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.397756   51643 system_pods.go:61] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.397780   51643 system_pods.go:61] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.397802   51643 system_pods.go:61] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.397842   51643 system_pods.go:61] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.397867   51643 system_pods.go:61] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.397978   51643 system_pods.go:61] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.398003   51643 system_pods.go:61] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.398030   51643 system_pods.go:61] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.398069   51643 system_pods.go:74] duration metric: took 9.856974ms to wait for pod list to return data ...
	I1029 08:47:58.398098   51643 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:47:58.402325   51643 default_sa.go:45] found service account: "default"
	I1029 08:47:58.402401   51643 default_sa.go:55] duration metric: took 4.283713ms for default service account to be created ...
	I1029 08:47:58.402426   51643 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:47:58.411486   51643 system_pods.go:86] 26 kube-system pods found
	I1029 08:47:58.411568   51643 system_pods.go:89] "coredns-66bc5c9577-hhhxx" [e56e0269-e45a-43e3-a77e-177a0a756b40] Running
	I1029 08:47:58.411592   51643 system_pods.go:89] "coredns-66bc5c9577-vcp67" [f0f6bb79-544e-4586-aef9-3a82b1c78ecc] Running
	I1029 08:47:58.411631   51643 system_pods.go:89] "etcd-ha-894836" [5cd4d1f7-1dcb-4100-a31e-208ccc817ea3] Running
	I1029 08:47:58.411661   51643 system_pods.go:89] "etcd-ha-894836-m02" [2a90d177-9fd1-49e1-8c1e-79e3a1b5c413] Running
	I1029 08:47:58.411686   51643 system_pods.go:89] "etcd-ha-894836-m03" [6cd41576-e310-4635-9b94-f2d09bfe4222] Running
	I1029 08:47:58.411725   51643 system_pods.go:89] "kindnet-bjfp7" [dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f] Running
	I1029 08:47:58.411755   51643 system_pods.go:89] "kindnet-hg69g" [8938d12e-502d-4a8c-84a5-018253ac53ba] Running
	I1029 08:47:58.411785   51643 system_pods.go:89] "kindnet-q8tvb" [1da0da6b-7d7f-45c0-9dab-afd839431062] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 08:47:58.411826   51643 system_pods.go:89] "kindnet-qkxpk" [a5470a24-fa80-424b-b421-001526b2593b] Running
	I1029 08:47:58.411849   51643 system_pods.go:89] "kube-apiserver-ha-894836" [b94cee38-e526-4d61-a186-f91144703115] Running
	I1029 08:47:58.411887   51643 system_pods.go:89] "kube-apiserver-ha-894836-m02" [c3caf692-d34f-4888-a75f-456b448a2676] Running
	I1029 08:47:58.411913   51643 system_pods.go:89] "kube-apiserver-ha-894836-m03" [8c8e2229-e880-40d7-824c-cb83b74bb8f5] Running
	I1029 08:47:58.411942   51643 system_pods.go:89] "kube-controller-manager-ha-894836" [310aa2d6-f3db-4980-bd00-c377cfdc9246] Running
	I1029 08:47:58.411982   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m02" [d0f22e91-0e21-46b7-b40c-4b6837e3595f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 08:47:58.412004   51643 system_pods.go:89] "kube-controller-manager-ha-894836-m03" [455529ad-15de-4b00-b3f8-389c14c89a53] Running
	I1029 08:47:58.412046   51643 system_pods.go:89] "kube-proxy-59nqf" [849e97d0-893f-428e-9146-cd4ddf60b718] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 08:47:58.412074   51643 system_pods.go:89] "kube-proxy-bprsj" [927e6e10-9052-4c58-8eee-98a7e1c134dc] Running
	I1029 08:47:58.412099   51643 system_pods.go:89] "kube-proxy-gd8g6" [cbfb04f1-2bc7-4683-b99f-079f27c7b5e2] Running
	I1029 08:47:58.412131   51643 system_pods.go:89] "kube-proxy-gxrz7" [b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80] Running
	I1029 08:47:58.412157   51643 system_pods.go:89] "kube-scheduler-ha-894836" [da7be70f-32ae-474c-a25a-a4e7a6e02653] Running
	I1029 08:47:58.412180   51643 system_pods.go:89] "kube-scheduler-ha-894836-m02" [cd22d36a-aab6-49ba-bbad-376526393820] Running
	I1029 08:47:58.412217   51643 system_pods.go:89] "kube-scheduler-ha-894836-m03" [5c88adc4-d9d3-42d1-aac9-550c356f755f] Running
	I1029 08:47:58.412244   51643 system_pods.go:89] "kube-vip-ha-894836" [3304e5b5-10a5-4362-855f-966f12e19513] Running
	I1029 08:47:58.412269   51643 system_pods.go:89] "kube-vip-ha-894836-m02" [79aaa612-a92e-4c41-a92a-c4bc904d64b2] Running
	I1029 08:47:58.412360   51643 system_pods.go:89] "kube-vip-ha-894836-m03" [1ce7bac8-8c0a-41fc-9cc9-db0417bd4da7] Running
	I1029 08:47:58.412396   51643 system_pods.go:89] "storage-provisioner" [74a003fb-b5cc-4ffa-8560-fd41d1257bd6] Running
	I1029 08:47:58.412419   51643 system_pods.go:126] duration metric: took 9.970092ms to wait for k8s-apps to be running ...
	I1029 08:47:58.412443   51643 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:47:58.412532   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:47:58.430648   51643 system_svc.go:56] duration metric: took 18.183914ms WaitForService to wait for kubelet
	I1029 08:47:58.430727   51643 kubeadm.go:587] duration metric: took 18.735792001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:47:58.430763   51643 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:47:58.435505   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435585   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435615   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435636   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435667   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435691   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435709   51643 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 08:47:58.435750   51643 node_conditions.go:123] node cpu capacity is 2
	I1029 08:47:58.435776   51643 node_conditions.go:105] duration metric: took 4.978006ms to run NodePressure ...
	I1029 08:47:58.435804   51643 start.go:242] waiting for startup goroutines ...
	I1029 08:47:58.435853   51643 start.go:256] writing updated cluster config ...
	I1029 08:47:58.439739   51643 out.go:203] 
	I1029 08:47:58.443690   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:47:58.443882   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.447597   51643 out.go:179] * Starting "ha-894836-m03" control-plane node in "ha-894836" cluster
	I1029 08:47:58.451296   51643 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:47:58.454468   51643 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:47:58.457455   51643 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:47:58.457578   51643 cache.go:59] Caching tarball of preloaded images
	I1029 08:47:58.457532   51643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:47:58.457963   51643 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 08:47:58.457997   51643 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:47:58.458193   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.484925   51643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 08:47:58.484945   51643 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 08:47:58.484957   51643 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:47:58.484981   51643 start.go:360] acquireMachinesLock for ha-894836-m03: {Name:mkff6279e1eccd0127b32c0d6857db9b3fa3dac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:47:58.485031   51643 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-894836-m03"
	I1029 08:47:58.485050   51643 start.go:96] Skipping create...Using existing machine configuration
	I1029 08:47:58.485055   51643 fix.go:54] fixHost starting: m03
	I1029 08:47:58.485336   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.517723   51643 fix.go:112] recreateIfNeeded on ha-894836-m03: state=Stopped err=<nil>
	W1029 08:47:58.517747   51643 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 08:47:58.521056   51643 out.go:252] * Restarting existing docker container for "ha-894836-m03" ...
	I1029 08:47:58.521146   51643 cli_runner.go:164] Run: docker start ha-894836-m03
	I1029 08:47:58.923330   51643 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:47:58.955597   51643 kic.go:430] container "ha-894836-m03" state is running.
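fixHost for m03 takes the "machine exists but is Stopped" branch: the container is simply started again and its state re-read before SSH provisioning resumes. By hand that is just the same pair of commands the cli_runner lines above show:

    docker start ha-894836-m03
    docker container inspect ha-894836-m03 --format '{{.State.Status}}'   # -> running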
	I1029 08:47:58.955975   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:47:58.985436   51643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/config.json ...
	I1029 08:47:58.985727   51643 machine.go:94] provisionDockerMachine start ...
	I1029 08:47:58.985800   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:47:59.021071   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:47:59.021382   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:47:59.021392   51643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:47:59.022242   51643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 08:48:02.369899   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.369983   51643 ubuntu.go:182] provisioning hostname "ha-894836-m03"
	I1029 08:48:02.370089   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.396111   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.396431   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.396444   51643 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-894836-m03 && echo "ha-894836-m03" | sudo tee /etc/hostname
	I1029 08:48:02.706986   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-894836-m03
	
	I1029 08:48:02.707060   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:02.732902   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:02.733206   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:02.733231   51643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-894836-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-894836-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-894836-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:48:03.018167   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:48:03.018188   51643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 08:48:03.018211   51643 ubuntu.go:190] setting up certificates
	I1029 08:48:03.018221   51643 provision.go:84] configureAuth start
	I1029 08:48:03.018284   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:03.051408   51643 provision.go:143] copyHostCerts
	I1029 08:48:03.051450   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051486   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 08:48:03.051493   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 08:48:03.051568   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 08:48:03.051644   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051661   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 08:48:03.051666   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 08:48:03.051690   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 08:48:03.051728   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051744   51643 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 08:48:03.051748   51643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 08:48:03.051770   51643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 08:48:03.051815   51643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.ha-894836-m03 san=[127.0.0.1 192.168.49.4 ha-894836-m03 localhost minikube]
	I1029 08:48:04.283916   51643 provision.go:177] copyRemoteCerts
	I1029 08:48:04.283985   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:48:04.284031   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.301428   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:04.461287   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1029 08:48:04.461367   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:48:04.496816   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1029 08:48:04.496881   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:48:04.527177   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1029 08:48:04.527250   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:48:04.556555   51643 provision.go:87] duration metric: took 1.5383197s to configureAuth
	I1029 08:48:04.556585   51643 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:48:04.556817   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:48:04.556919   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:04.581700   51643 main.go:143] libmachine: Using SSH client type: native
	I1029 08:48:04.581999   51643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1029 08:48:04.582018   51643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:48:05.181543   51643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:48:05.181567   51643 machine.go:97] duration metric: took 6.195829937s to provisionDockerMachine
	I1029 08:48:05.181589   51643 start.go:293] postStartSetup for "ha-894836-m03" (driver="docker")
	I1029 08:48:05.181600   51643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:48:05.181674   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:48:05.181722   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.207592   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.322834   51643 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:48:05.327694   51643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:48:05.327775   51643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:48:05.327808   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 08:48:05.327899   51643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 08:48:05.328050   51643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 08:48:05.328079   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /etc/ssl/certs/45502.pem
	I1029 08:48:05.328256   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 08:48:05.343080   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:48:05.371323   51643 start.go:296] duration metric: took 189.718932ms for postStartSetup
	I1029 08:48:05.371417   51643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:48:05.371455   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.397947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.541458   51643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:48:05.561976   51643 fix.go:56] duration metric: took 7.076913817s for fixHost
	I1029 08:48:05.562004   51643 start.go:83] releasing machines lock for "ha-894836-m03", held for 7.076964665s
	I1029 08:48:05.562072   51643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:48:05.600883   51643 out.go:179] * Found network options:
	I1029 08:48:05.604417   51643 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1029 08:48:05.607757   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607793   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607816   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	W1029 08:48:05.607826   51643 proxy.go:120] fail to check proxy env: Error ip not in block
	I1029 08:48:05.607887   51643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:48:05.607928   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.607983   51643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:48:05.608041   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:48:05.654947   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:05.658008   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:48:06.130162   51643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:48:06.143305   51643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:48:06.143421   51643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:48:06.167460   51643 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 08:48:06.167489   51643 start.go:496] detecting cgroup driver to use...
	I1029 08:48:06.167523   51643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 08:48:06.167572   51643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:48:06.213970   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:48:06.251029   51643 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:48:06.251087   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:48:06.290080   51643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:48:06.327709   51643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:48:06.726326   51643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:48:07.139091   51643 docker.go:234] disabling docker service ...
	I1029 08:48:07.139182   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:48:07.178202   51643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:48:07.209433   51643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:48:07.608392   51643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:48:08.086947   51643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:48:08.121769   51643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:48:08.184236   51643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:48:08.184326   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.215828   51643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:48:08.215914   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.238638   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.269033   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.295262   51643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:48:08.331399   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.356819   51643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:48:08.389668   51643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
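
Taken together, the sed edits above converge on a handful of values in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. A sketch of the net effect, with the usual CRI-O section headers assumed (minikube patches the existing file in place rather than rewriting it):

    # net effect of the edits above on /etc/crio/crio.conf.d/02-crio.conf
    # (section headers are the standard CRI-O ones; values exactly as set above)
    #
    #   [crio.image]
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #
    #   [crio.runtime]
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    #
    # verify the values actually landed, then reload CRI-O
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
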
	I1029 08:48:08.403860   51643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:48:08.423244   51643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:48:08.437579   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:48:08.832580   51643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:49:39.275381   51643 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.442758035s)
	I1029 08:49:39.275412   51643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:49:39.275483   51643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:49:39.279771   51643 start.go:564] Will wait 60s for crictl version
	I1029 08:49:39.279855   51643 ssh_runner.go:195] Run: which crictl
	I1029 08:49:39.284759   51643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:49:39.334853   51643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:49:39.334984   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.371804   51643 ssh_runner.go:195] Run: crio --version
	I1029 08:49:39.405984   51643 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:49:39.412429   51643 out.go:179]   - env NO_PROXY=192.168.49.2
	I1029 08:49:39.415504   51643 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1029 08:49:39.418469   51643 cli_runner.go:164] Run: docker network inspect ha-894836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:49:39.435673   51643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:49:39.440794   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
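
The one-liner above is an idempotent hosts-file update: drop any existing line for the name, append the current mapping, and copy the temp file into place with sudo. A sketch of the same pattern with the pieces named (the variables are illustrative only, not from the log):

    NAME=host.minikube.internal   # illustrative variables
    IP=192.168.49.1
    # keep every line except an old entry for $NAME, then append the new mapping
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
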
	I1029 08:49:39.451208   51643 mustload.go:66] Loading cluster: ha-894836
	I1029 08:49:39.451471   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:39.451781   51643 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:49:39.468915   51643 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:49:39.469188   51643 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836 for IP: 192.168.49.4
	I1029 08:49:39.469202   51643 certs.go:195] generating shared ca certs ...
	I1029 08:49:39.469216   51643 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:49:39.469334   51643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 08:49:39.469401   51643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 08:49:39.469413   51643 certs.go:257] generating profile certs ...
	I1029 08:49:39.469489   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key
	I1029 08:49:39.469559   51643 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key.761eb988
	I1029 08:49:39.469601   51643 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key
	I1029 08:49:39.469613   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1029 08:49:39.469625   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1029 08:49:39.469641   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1029 08:49:39.469654   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1029 08:49:39.469666   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1029 08:49:39.469679   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1029 08:49:39.469694   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1029 08:49:39.469705   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1029 08:49:39.469761   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 08:49:39.469793   51643 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 08:49:39.469805   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:49:39.469829   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:49:39.469858   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:49:39.469887   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 08:49:39.469934   51643 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 08:49:39.469964   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> /usr/share/ca-certificates/45502.pem
	I1029 08:49:39.469983   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:39.469994   51643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem -> /usr/share/ca-certificates/4550.pem
	I1029 08:49:39.470057   51643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:49:39.488996   51643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:49:39.588688   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1029 08:49:39.592443   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1029 08:49:39.600773   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1029 08:49:39.604466   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1029 08:49:39.613528   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1029 08:49:39.617112   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1029 08:49:39.625577   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1029 08:49:39.629278   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1029 08:49:39.637493   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1029 08:49:39.641121   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1029 08:49:39.650070   51643 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1029 08:49:39.653954   51643 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1029 08:49:39.662931   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:49:39.685107   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:49:39.705459   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:49:39.724858   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:49:39.743556   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 08:49:39.762456   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:49:39.781042   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:49:39.803894   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 08:49:39.827899   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 08:49:39.848693   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:49:39.875006   51643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 08:49:39.895980   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1029 08:49:39.909585   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1029 08:49:39.922536   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1029 08:49:39.935718   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1029 08:49:39.950308   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1029 08:49:39.965160   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1029 08:49:39.979271   51643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1029 08:49:39.992671   51643 ssh_runner.go:195] Run: openssl version
	I1029 08:49:39.999106   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 08:49:40.009754   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016736   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.016877   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 08:49:40.067934   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 08:49:40.077186   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:49:40.086864   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091154   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.091257   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:49:40.134215   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:49:40.142049   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 08:49:40.150815   51643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154732   51643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.154796   51643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 08:49:40.196358   51643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 08:49:40.204753   51643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:49:40.208825   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 08:49:40.251130   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 08:49:40.293659   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 08:49:40.335303   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 08:49:40.378403   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 08:49:40.419111   51643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
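
Each openssl run above uses -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so a plain exit-status check decides whether regeneration is needed. A minimal standalone sketch:

    # fail fast if a cert is within 24h of expiry (path taken from the log)
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "certificate valid for at least another day"
    else
      echo "certificate expires within 24h (or is already expired)" >&2
    fi
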
	I1029 08:49:40.459947   51643 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1029 08:49:40.460045   51643 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-894836-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-894836 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
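
The unit fragment above becomes the kubelet drop-in written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the stock command before the override with the node-specific flags is added. A quick sketch of verifying what kubelet will actually run with, using standard systemd tooling:

    systemctl cat kubelet                           # unit plus 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager  # the effective command line
    sudo journalctl -u kubelet -n 50 --no-pager     # recent kubelet output
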
	I1029 08:49:40.460074   51643 kube-vip.go:115] generating kube-vip config ...
	I1029 08:49:40.460122   51643 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1029 08:49:40.472263   51643 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:49:40.472402   51643 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1029 08:49:40.472491   51643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:49:40.482442   51643 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:49:40.482527   51643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1029 08:49:40.491244   51643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:49:40.509334   51643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:49:40.522741   51643 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
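
Because the generated manifest is copied into /etc/kubernetes/manifests/kube-vip.yaml, kubelet runs kube-vip as a static pod on this control-plane node; no API-server interaction is needed to start it. A hedged sketch of confirming it came up (file and pod names as in the log; the crictl flags are standard):

    ls -l /etc/kubernetes/manifests/kube-vip.yaml   # manifest in place
    sudo crictl pods --name kube-vip                # sandbox created by kubelet
    sudo crictl ps --name kube-vip                  # the kube-vip manager container
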
	I1029 08:49:40.543511   51643 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1029 08:49:40.549027   51643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:49:40.559626   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.700906   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.716131   51643 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:49:40.716494   51643 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:49:40.720440   51643 out.go:179] * Verifying Kubernetes components...
	I1029 08:49:40.723093   51643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:49:40.849270   51643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:49:40.870801   51643 kapi.go:59] client config for ha-894836: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/ha-894836/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1029 08:49:40.870875   51643 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1029 08:49:40.871137   51643 node_ready.go:35] waiting up to 6m0s for node "ha-894836-m03" to be "Ready" ...
	W1029 08:49:42.878542   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:45.376167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:47.875546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:49.879197   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:52.374859   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:54.874674   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:56.875642   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:49:59.385971   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:01.874925   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:04.375281   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:06.875417   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:08.877527   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:11.374735   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:13.374773   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:15.875423   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:18.374307   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:20.375009   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:22.875458   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:24.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:27.374436   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:29.375591   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:31.875678   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:33.876408   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:36.375279   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:38.875405   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:40.875687   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:43.375139   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:45.376751   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:47.874681   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:50.375198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:52.874746   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:54.875461   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:57.374875   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:50:59.375081   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:01.874956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:03.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:05.875856   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:07.875956   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:10.374910   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:12.375300   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:14.874455   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:16.874501   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:18.881741   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:21.374575   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:23.375182   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:25.875630   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:28.375397   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:30.376726   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:32.874952   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:35.375371   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:37.875672   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:40.374584   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:42.375166   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:44.375299   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:46.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:48.876305   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:51.375111   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:53.375554   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:55.874828   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:51:58.374446   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:00.391777   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:02.875635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:05.374696   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:07.875548   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:10.374764   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:12.375076   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:14.874580   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:16.875240   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:18.880605   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:21.375072   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:23.875108   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:26.375196   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:28.375284   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:30.875177   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:32.875570   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:35.374573   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:37.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:39.375982   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:41.875595   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:44.377104   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:46.875402   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:48.877198   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:51.375357   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:53.874734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:55.875011   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:52:57.875521   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:00.380590   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:02.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:05.375714   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:07.875383   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:10.374415   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:12.376491   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:14.875713   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:17.375204   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:19.377537   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:21.877439   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:24.375155   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:26.874635   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:28.881623   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:31.374848   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:33.374930   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:35.875771   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:38.375835   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:40.875765   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:43.375167   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:45.874879   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:47.878546   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:50.375661   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:52.875435   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:55.375646   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:57.874489   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:53:59.875624   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:02.375174   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:04.874940   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:07.375497   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:09.875063   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:11.875223   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:13.875266   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:16.378660   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:18.883945   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:21.374606   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:23.376495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:25.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:28.375564   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:30.875734   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:33.375292   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:35.875496   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:38.375495   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:40.874844   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:42.874893   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:45.376206   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:47.875511   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:50.375400   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:52.875571   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:55.374747   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:57.374957   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:54:59.375343   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:01.876012   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:04.374336   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:06.374603   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:08.875609   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:11.375178   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:13.375447   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:15.376425   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:17.874841   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:20.375318   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:22.874543   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:25.375289   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:27.874901   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:30.374710   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:32.375028   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:34.375632   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:36.875017   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	W1029 08:55:38.877472   51643 node_ready.go:57] node "ha-894836-m03" has "Ready":"Unknown" status (will retry)
	I1029 08:55:40.871415   51643 node_ready.go:38] duration metric: took 6m0.000252794s for node "ha-894836-m03" to be "Ready" ...
	I1029 08:55:40.874909   51643 out.go:203] 
	W1029 08:55:40.877827   51643 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1029 08:55:40.877849   51643 out.go:285] * 
	W1029 08:55:40.880012   51643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:55:40.882934   51643 out.go:203] 
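
The wait loop above polls the node's Ready condition for the entire 6m0s budget and never sees it leave "Unknown", which is what surfaces as the GUEST_START failure. A sketch of reading the same condition directly against the cluster (node name from the log; the jsonpath filter is standard kubectl):

    # status of the Ready condition kubelet is (not) reporting for the node
    kubectl get node ha-894836-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # reason and message behind that status
    kubectl describe node ha-894836-m03
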
	
	
	==> CRI-O <==
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.405473293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=32150040-13c5-4993-9d53-1d8c8b936dae name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.406558171Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f533fe3b-c6cb-4daf-8190-4ca198dc0664 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.406654286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.411556435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.412037753Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5eff9b5708aaba3e35120e5c17dfcd8d88e7135226bba9538b85d1bdd299f814/merged/etc/passwd: no such file or directory"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.41219942Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5eff9b5708aaba3e35120e5c17dfcd8d88e7135226bba9538b85d1bdd299f814/merged/etc/group: no such file or directory"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.412764686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.435659025Z" level=info msg="Created container 3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b: kube-system/storage-provisioner/storage-provisioner" id=f533fe3b-c6cb-4daf-8190-4ca198dc0664 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.436586896Z" level=info msg="Starting container: 3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b" id=d8955627-909b-475a-944e-ac1a3b5d4e96 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:48:31 ha-894836 crio[665]: time="2025-10-29T08:48:31.43975017Z" level=info msg="Started container" PID=1368 containerID=3d37627bfbc5fda963a0c849ee3de0fd939c938a1ae880f8853db63e9ec5b57b description=kube-system/storage-provisioner/storage-provisioner id=d8955627-909b-475a-944e-ac1a3b5d4e96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.916829212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920294611Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920371732Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.920393944Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.926141566Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.926179974Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.92620596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930512623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930548259Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.930572817Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934035459Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934075393Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.934102561Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.937441337Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 08:48:40 ha-894836 crio[665]: time="2025-10-29T08:48:40.937480057Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3d37627bfbc5f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   c6058cbca67d0       storage-provisioner                 kube-system
	7e6beb43bb335       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   9aa14b66630e2       coredns-66bc5c9577-hhhxx            kube-system
	69e1be8c137ed       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   c6058cbca67d0       storage-provisioner                 kube-system
	e7956795c58f4       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   541c10c0d9e9d       busybox-7b57f96db7-hl8ll            default
	4ac7e4e48f2d6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   97069c7ad741e       kube-proxy-gxrz7                    kube-system
	b59e1fb940c3f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   662869c52a2c8       kindnet-bjfp7                       kube-system
	f4d98e59447db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   fb9556b60baf7       coredns-66bc5c9577-vcp67            kube-system
	e00d3f78d68d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   27c7e21f538bd       kube-apiserver-ha-894836            kube-system
	a917c056972ea       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   cb582940fcc64       kube-vip-ha-894836                  kube-system
	67e5abbb69757       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   615eac85d59b6       kube-scheduler-ha-894836            kube-system
	ffcbb54d6ce44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   1                   3a2ab0bee942f       kube-controller-manager-ha-894836   kube-system
	c5012e77d5995       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   0d7cccc011f06       etcd-ha-894836                      kube-system
	
	
	==> coredns [7e6beb43bb33582fbfaddc581b0968352916d1ba99aca6791d37ebb24f48a116] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34440 - 7065 "HINFO IN 8445725135211176428.1755746847705524405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013494166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f4d98e59447db0183f40bf805b64d3d4db57ead54fe530999384509e544cc7d9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42938 - 8700 "HINFO IN 4442209450395311171.7481964028264372801. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023094613s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-894836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:41:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:55:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:41:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:55:23 +0000   Wed, 29 Oct 2025 08:42:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-894836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cd4b1ccd-742f-4f33-9ae4-c8bc3e629f16
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hl8ll             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hhhxx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-vcp67             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-894836                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-bjfp7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-894836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-894836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gxrz7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-894836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-894836                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-894836 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   Starting                 8m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m26s (x8 over 8m27s)  kubelet          Node ha-894836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m26s (x8 over 8m27s)  kubelet          Node ha-894836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m26s (x8 over 8m27s)  kubelet          Node ha-894836 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m                     node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	  Normal   RegisteredNode           7m45s                  node-controller  Node ha-894836 event: Registered Node ha-894836 in Controller
	
	
	Name:               ha-894836-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:42:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:55:46 +0000   Wed, 29 Oct 2025 08:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-894836-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                80b3d6bd-ca52-4282-b4dd-9a277fb019ad
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fj895                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-894836-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-q8tvb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-894836-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-894836-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-59nqf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-894836-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-894836-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m32s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   NodeHasSufficientPID     9m28s (x8 over 9m28s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node ha-894836-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   Starting                 8m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node ha-894836-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m22s (x8 over 8m22s)  kubelet          Node ha-894836-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m                     node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	  Normal   RegisteredNode           7m45s                  node-controller  Node ha-894836-m02 event: Registered Node ha-894836-m02 in Controller
	
	
	Name:               ha-894836-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-894836-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=ha-894836
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_29T08_45_06_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:45:06 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-894836-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:46:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 29 Oct 2025 08:45:49 +0000   Wed, 29 Oct 2025 08:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-894836-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                a6b33f47-a46d-4ce9-9424-db5d023a3b7c
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hg69g       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-bprsj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-894836-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-894836-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m56s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           8m                 node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   RegisteredNode           7m45s              node-controller  Node ha-894836-m04 event: Registered Node ha-894836-m04 in Controller
	  Normal   NodeNotReady             7m10s              node-controller  Node ha-894836-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct29 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014848] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.520802] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035216] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.815569] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.730396] kauditd_printk_skb: 36 callbacks suppressed
	[Oct29 08:19] kauditd_printk_skb: 8 callbacks suppressed
	[Oct29 08:21] overlayfs: idmapped layers are currently not supported
	[  +0.080642] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct29 08:26] overlayfs: idmapped layers are currently not supported
	[Oct29 08:27] overlayfs: idmapped layers are currently not supported
	[Oct29 08:41] overlayfs: idmapped layers are currently not supported
	[Oct29 08:42] overlayfs: idmapped layers are currently not supported
	[Oct29 08:43] overlayfs: idmapped layers are currently not supported
	[Oct29 08:45] overlayfs: idmapped layers are currently not supported
	[Oct29 08:46] overlayfs: idmapped layers are currently not supported
	[Oct29 08:47] overlayfs: idmapped layers are currently not supported
	[  +4.220383] overlayfs: idmapped layers are currently not supported
	[Oct29 08:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c5012e77d5995d67461a19df092ba7b0617af55e88a4f413560ffb01b7c5dd86] <==
	{"level":"warn","ts":"2025-10-29T08:55:37.786299Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:37.786354Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:40.231758Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:40.231798Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:41.787909Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:41.787976Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b0fdec051931967a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:45.232786Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:45.232724Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b0fdec051931967a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-29T08:55:45.481653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.501921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.509326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.520416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:55:45.534388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58672","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.4:58672: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-29T08:55:45.534768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58674","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:55:45.549450Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13886940692718237272)"}
	{"level":"info","ts":"2025-10-29T08:55:45.551570Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b0fdec051931967a","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-29T08:55:45.551618Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551652Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551684Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551702Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551735Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551763Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551793Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b0fdec051931967a"}
	{"level":"info","ts":"2025-10-29T08:55:45.551805Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b0fdec051931967a"}
	{"level":"warn","ts":"2025-10-29T08:55:45.556358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:58682","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:55:55 up 38 min,  0 user,  load average: 1.55, 1.64, 1.43
	Linux ha-894836 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b59e1fb940c3f6ad37293176d85dd63473e5ac8494b7819987c7064627f6d94c] <==
	I1029 08:55:20.917053       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:20.917112       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:20.917123       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:30.916442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:30.916483       1 main.go:301] handling current node
	I1029 08:55:30.916498       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:30.916503       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:30.916677       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:30.916691       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:30.916824       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:30.916838       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:40.923925       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:40.924058       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	I1029 08:55:40.924212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:40.924253       1 main.go:301] handling current node
	I1029 08:55:40.924305       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:40.924368       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:40.924514       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1029 08:55:40.924552       1 main.go:324] Node ha-894836-m03 has CIDR [10.244.2.0/24] 
	I1029 08:55:50.916416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:55:50.916453       1 main.go:301] handling current node
	I1029 08:55:50.916493       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1029 08:55:50.916501       1 main.go:324] Node ha-894836-m02 has CIDR [10.244.1.0/24] 
	I1029 08:55:50.916643       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1029 08:55:50.916658       1 main.go:324] Node ha-894836-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e00d3f78d68d909f0332f199fdaf28199771c94a7e8d59cc954f4172c68c75fe] <==
	I1029 08:47:52.921966       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 08:47:52.919543       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 08:47:52.926729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 08:47:52.926973       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 08:47:52.933488       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W1029 08:47:52.938611       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1029 08:47:52.945598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 08:47:52.946057       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 08:47:52.946083       1 policy_source.go:240] refreshing policies
	I1029 08:47:52.951298       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 08:47:52.951418       1 aggregator.go:171] initial CRD sync complete...
	I1029 08:47:52.951451       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 08:47:52.951481       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 08:47:52.951508       1 cache.go:39] Caches are synced for autoregister controller
	I1029 08:47:52.977975       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 08:47:52.993034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 08:47:53.040242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 08:47:53.057186       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1029 08:47:53.065202       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1029 08:47:53.534383       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1029 08:47:53.979043       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1029 08:47:54.542753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 08:47:59.474057       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 08:47:59.516146       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 08:47:59.654898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [ffcbb54d6ce4436f5aec8bb9428ef3aa2b15fa9ee4079908fa14d7ee16acbc0c] <==
	I1029 08:47:55.875853       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 08:47:55.882183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 08:47:55.882278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 08:47:55.882348       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 08:47:55.883459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 08:47:55.887670       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 08:47:55.890946       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:47:55.891049       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 08:47:55.892406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 08:47:55.892502       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 08:47:55.892563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-894836-m04"
	I1029 08:47:55.893176       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:47:55.894883       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 08:47:55.898545       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:47:55.898596       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:47:55.901253       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:47:55.901667       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 08:47:55.905025       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 08:47:55.905294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:47:55.917000       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 08:48:42.390447       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqj79\": the object has been modified; please apply your changes to the latest version and try again"
	I1029 08:48:42.392685       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7aa7c40c-2de0-444b-84d5-38273baecd29", APIVersion:"v1", ResourceVersion:"311", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqj79": the object has been modified; please apply your changes to the latest version and try again
	I1029 08:48:42.407658       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqj79\": the object has been modified; please apply your changes to the latest version and try again"
	I1029 08:48:42.407815       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7aa7c40c-2de0-444b-84d5-38273baecd29", APIVersion:"v1", ResourceVersion:"311", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqj79 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqj79": the object has been modified; please apply your changes to the latest version and try again
	I1029 08:53:56.021427       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gmd49"
	
	
	==> kube-proxy [4ac7e4e48f2d67e6c26eb63b7aff7bf2e7c9e3065e9d277bfed197195815f419] <==
	I1029 08:48:00.832054       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:48:01.014142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:48:01.114385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:48:01.114528       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:48:01.114683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:48:01.305529       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:48:01.305578       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:48:01.412541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:48:01.412931       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:48:01.413206       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:48:01.414509       1 config.go:200] "Starting service config controller"
	I1029 08:48:01.414592       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:48:01.414674       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:48:01.414708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:48:01.414746       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:48:01.414771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:48:01.437651       1 config.go:309] "Starting node config controller"
	I1029 08:48:01.437795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:48:01.437892       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:48:01.521251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:48:01.521390       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:48:01.521472       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67e5abbb69757832239af83063ef76100de2cec956cd044965ac792572fce7d8] <==
	I1029 08:47:52.800319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 08:47:52.800365       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:47:52.815921       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 08:47:52.816162       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:47:52.829112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1029 08:47:52.834749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1029 08:47:52.816196       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 08:47:52.892990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:47:52.893149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:47:52.893207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:47:52.893255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:47:52.893310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:47:52.893364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:47:52.893406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:47:52.893454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:47:52.893501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:47:52.893542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:47:52.893586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:47:52.893632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:47:52.893673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:47:52.893723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:47:52.893786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:47:52.893831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:47:52.893871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1029 08:47:52.934773       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:47:58 ha-894836 kubelet[799]: E1029 08:47:58.553127     799 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-894836\" already exists" pod="kube-system/etcd-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.553347     799 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: E1029 08:47:58.581653     799 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-894836\" already exists" pod="kube-system/kube-apiserver-ha-894836"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.877519     799 apiserver.go:52] "Watching apiserver"
	Oct 29 08:47:58 ha-894836 kubelet[799]: I1029 08:47:58.897570     799 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-894836" podUID="3304e5b5-10a5-4362-855f-966f12e19513"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.022914     799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.027611     799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984de04af66c2e9a91b240b1eee4ab93" path="/var/lib/kubelet/pods/984de04af66c2e9a91b240b1eee4ab93/volumes"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.057848     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-cni-cfg\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071318     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-lib-modules\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071556     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80-lib-modules\") pod \"kube-proxy-gxrz7\" (UID: \"b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80\") " pod="kube-system/kube-proxy-gxrz7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.071666     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f-xtables-lock\") pod \"kindnet-bjfp7\" (UID: \"dea5c187-a5ec-4d74-90e1-bf9c51c5bb6f\") " pod="kube-system/kindnet-bjfp7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.074936     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80-xtables-lock\") pod \"kube-proxy-gxrz7\" (UID: \"b0ef623f-f7ad-4b5a-8d1e-b08dc6d1ce80\") " pod="kube-system/kube-proxy-gxrz7"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.075062     799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/74a003fb-b5cc-4ffa-8560-fd41d1257bd6-tmp\") pod \"storage-provisioner\" (UID: \"74a003fb-b5cc-4ffa-8560-fd41d1257bd6\") " pod="kube-system/storage-provisioner"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.085145     799 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-894836"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.085320     799 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-894836"
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.188071     799 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.294117     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af WatchSource:0}: Error finding container fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af: Status 404 returned error can't find the container with id fb9556b60baf7c523def4c79090adb05a1fc8173805d4bac0ef0573ad095f5af
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.580481     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0 WatchSource:0}: Error finding container 662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0: Status 404 returned error can't find the container with id 662869c52a2c8133f956cfa328c8268d25d33d960ea2cf7acd20858704627dc0
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.659441     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5 WatchSource:0}: Error finding container c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5: Status 404 returned error can't find the container with id c6058cbca67d071839a960a649f1de901cec31652fc327f56667100a324eb7e5
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.686006     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db WatchSource:0}: Error finding container 97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db: Status 404 returned error can't find the container with id 97069c7ad741e21a29e8b1c5b9e77d1159528e8e44e976bd587439e97920f6db
	Oct 29 08:47:59 ha-894836 kubelet[799]: I1029 08:47:59.830139     799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-894836" podStartSLOduration=0.830111939 podStartE2EDuration="830.111939ms" podCreationTimestamp="2025-10-29 08:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 08:47:59.71182927 +0000 UTC m=+30.969423145" watchObservedRunningTime="2025-10-29 08:47:59.830111939 +0000 UTC m=+31.087705806"
	Oct 29 08:47:59 ha-894836 kubelet[799]: W1029 08:47:59.917098     799 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio-541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68 WatchSource:0}: Error finding container 541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68: Status 404 returned error can't find the container with id 541c10c0d9e9d889360a0c967d7f0004f27a9816efc8471371b080bd9c9e5b68
	Oct 29 08:48:28 ha-894836 kubelet[799]: E1029 08:48:28.877765     799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b\": container with ID starting with 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b not found: ID does not exist" containerID="00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b"
	Oct 29 08:48:28 ha-894836 kubelet[799]: I1029 08:48:28.877826     799 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b" err="rpc error: code = NotFound desc = could not find container \"00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b\": container with ID starting with 00333e33883fd76a53b14a1f8680fa8d01d5e0e724d961b7eeaeb3a0a4a4ed6b not found: ID does not exist"
	Oct 29 08:48:31 ha-894836 kubelet[799]: I1029 08:48:31.401594     799 scope.go:117] "RemoveContainer" containerID="69e1be8c137eda9847c41a23a137e76dd93f5a10225b59b8180411d6cb08e5d4"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-894836 -n ha-894836
helpers_test.go:269: (dbg) Run:  kubectl --context ha-894836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-wpcg6
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-894836 describe pod busybox-7b57f96db7-wpcg6
helpers_test.go:290: (dbg) kubectl --context ha-894836 describe pod busybox-7b57f96db7-wpcg6:

-- stdout --
	Name:             busybox-7b57f96db7-wpcg6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9tsv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-m9tsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.10s)

x
+
TestJSONOutput/pause/Command (2.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-852077 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-852077 --output=json --user=testUser: exit status 80 (2.496899222s)

-- stdout --
	{"specversion":"1.0","id":"a38ded8a-e208-40b7-98f8-71f2a3367e8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-852077 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"c31c2641-350d-4e34-a81f-0245c2b5fb70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-29T09:00:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"27dd5179-be25-4a08-86c4-073c6e19e120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-852077 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.50s)

x
+
TestJSONOutput/unpause/Command (2.11s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-852077 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-852077 --output=json --user=testUser: exit status 80 (2.10709611s)

-- stdout --
	{"specversion":"1.0","id":"c3dae226-77ab-44fe-9cce-cb272271945f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-852077 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e2990321-a97a-4a78-b406-f3633e48a8a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-29T09:00:49Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"d293bcad-f9f0-4c8c-9bcb-b201e7604411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-852077 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.11s)

x
+
TestPreload (447.61s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-026333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1029 09:13:13.387257    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-026333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.868762056s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-026333 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-026333 image pull gcr.io/k8s-minikube/busybox: (2.214484315s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-026333
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-026333: (5.910519171s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-026333 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1029 09:15:24.584096    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:17:56.474134    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:18:13.387806    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:18:27.652653    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:20:24.584117    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p test-preload-026333 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m11.547999969s)

-- stdout --
	* [test-preload-026333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-026333" primary control-plane node in "test-preload-026333" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1029 09:14:15.733803  127750 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:14:15.734006  127750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:14:15.734034  127750 out.go:374] Setting ErrFile to fd 2...
	I1029 09:14:15.734053  127750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:14:15.734350  127750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:14:15.734755  127750 out.go:368] Setting JSON to false
	I1029 09:14:15.735622  127750 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3408,"bootTime":1761725848,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:14:15.735718  127750 start.go:143] virtualization:  
	I1029 09:14:15.738965  127750 out.go:179] * [test-preload-026333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:14:15.742701  127750 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:14:15.742828  127750 notify.go:221] Checking for updates...
	I1029 09:14:15.748496  127750 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:14:15.751358  127750 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:14:15.754293  127750 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:14:15.757130  127750 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:14:15.760008  127750 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:14:15.763393  127750 config.go:182] Loaded profile config "test-preload-026333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:14:15.766882  127750 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1029 09:14:15.769759  127750 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:14:15.800049  127750 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:14:15.800164  127750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:14:15.864872  127750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 09:14:15.855258031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:14:15.864976  127750 docker.go:319] overlay module found
	I1029 09:14:15.868104  127750 out.go:179] * Using the docker driver based on existing profile
	I1029 09:14:15.870996  127750 start.go:309] selected driver: docker
	I1029 09:14:15.871021  127750 start.go:930] validating driver "docker" against &{Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:14:15.871127  127750 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:14:15.871828  127750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:14:15.932602  127750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 09:14:15.914664887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:14:15.932926  127750 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:14:15.932966  127750 cni.go:84] Creating CNI manager for ""
	I1029 09:14:15.933026  127750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:14:15.933076  127750 start.go:353] cluster config:
	{Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:14:15.936121  127750 out.go:179] * Starting "test-preload-026333" primary control-plane node in "test-preload-026333" cluster
	I1029 09:14:15.939009  127750 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:14:15.942088  127750 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:14:15.944846  127750 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:14:15.944938  127750 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:14:15.963902  127750 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:14:15.963931  127750 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:14:16.001241  127750 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1029 09:14:16.001268  127750 cache.go:59] Caching tarball of preloaded images
	I1029 09:14:16.001433  127750 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:14:16.006225  127750 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1029 09:14:16.009109  127750 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1029 09:14:16.102092  127750 preload.go:290] Got checksum from GCS API "d3dc3b83b826438926b7b91af837ed7b"
	I1029 09:14:16.102166  127750 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1029 09:14:19.552944  127750 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1029 09:14:19.553143  127750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/config.json ...
	I1029 09:14:19.553384  127750 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:14:19.553413  127750 start.go:360] acquireMachinesLock for test-preload-026333: {Name:mke876a7ee2c7a778c48eec68b2f6ad625f5e63b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:14:19.553483  127750 start.go:364] duration metric: took 41.337µs to acquireMachinesLock for "test-preload-026333"
	I1029 09:14:19.553495  127750 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:14:19.553501  127750 fix.go:54] fixHost starting: 
	I1029 09:14:19.553764  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:19.570203  127750 fix.go:112] recreateIfNeeded on test-preload-026333: state=Stopped err=<nil>
	W1029 09:14:19.570235  127750 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:14:19.573563  127750 out.go:252] * Restarting existing docker container for "test-preload-026333" ...
	I1029 09:14:19.573638  127750 cli_runner.go:164] Run: docker start test-preload-026333
	I1029 09:14:19.808338  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:19.832017  127750 kic.go:430] container "test-preload-026333" state is running.
	I1029 09:14:19.832606  127750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-026333
	I1029 09:14:19.860019  127750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/config.json ...
	I1029 09:14:19.860443  127750 machine.go:94] provisionDockerMachine start ...
	I1029 09:14:19.860573  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:19.881097  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:19.881407  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:19.881416  127750 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:14:19.882144  127750 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:14:23.032332  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-026333
	
	I1029 09:14:23.032358  127750 ubuntu.go:182] provisioning hostname "test-preload-026333"
	I1029 09:14:23.032479  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.052499  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:23.052827  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:23.052853  127750 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-026333 && echo "test-preload-026333" | sudo tee /etc/hostname
	I1029 09:14:23.209944  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-026333
	
	I1029 09:14:23.210018  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.227313  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:23.227628  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:23.227650  127750 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-026333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-026333/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-026333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:14:23.376469  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:14:23.376499  127750 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:14:23.376517  127750 ubuntu.go:190] setting up certificates
	I1029 09:14:23.376537  127750 provision.go:84] configureAuth start
	I1029 09:14:23.376599  127750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-026333
	I1029 09:14:23.393852  127750 provision.go:143] copyHostCerts
	I1029 09:14:23.393925  127750 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:14:23.393945  127750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:14:23.394018  127750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:14:23.394118  127750 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:14:23.394127  127750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:14:23.394156  127750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:14:23.394221  127750 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:14:23.394230  127750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:14:23.394257  127750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:14:23.394319  127750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.test-preload-026333 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-026333]
	I1029 09:14:23.551682  127750 provision.go:177] copyRemoteCerts
	I1029 09:14:23.551760  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:14:23.551828  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.570825  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:23.676387  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:14:23.693267  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1029 09:14:23.710471  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:14:23.727144  127750 provision.go:87] duration metric: took 350.589343ms to configureAuth
	I1029 09:14:23.727170  127750 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:14:23.727359  127750 config.go:182] Loaded profile config "test-preload-026333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:14:23.727468  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.744267  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:23.744625  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:23.744648  127750 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:14:24.055100  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:14:24.055128  127750 machine.go:97] duration metric: took 4.194626776s to provisionDockerMachine
	I1029 09:14:24.055139  127750 start.go:293] postStartSetup for "test-preload-026333" (driver="docker")
	I1029 09:14:24.055151  127750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:14:24.055227  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:14:24.055272  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.076276  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.180199  127750 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:14:24.183859  127750 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:14:24.183901  127750 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:14:24.183937  127750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:14:24.184029  127750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:14:24.184138  127750 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:14:24.184286  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:14:24.193193  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:14:24.210652  127750 start.go:296] duration metric: took 155.49644ms for postStartSetup
	I1029 09:14:24.210752  127750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:14:24.210816  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.228045  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.329567  127750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:14:24.334459  127750 fix.go:56] duration metric: took 4.780951282s for fixHost
	I1029 09:14:24.334485  127750 start.go:83] releasing machines lock for "test-preload-026333", held for 4.780992965s
	I1029 09:14:24.334570  127750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-026333
	I1029 09:14:24.351255  127750 ssh_runner.go:195] Run: cat /version.json
	I1029 09:14:24.351308  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.351373  127750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:14:24.351441  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.371882  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.371922  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.570519  127750 ssh_runner.go:195] Run: systemctl --version
	I1029 09:14:24.576978  127750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:14:24.611955  127750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:14:24.616581  127750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:14:24.616699  127750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:14:24.624488  127750 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:14:24.624515  127750 start.go:496] detecting cgroup driver to use...
	I1029 09:14:24.624558  127750 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:14:24.624614  127750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:14:24.639536  127750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:14:24.652421  127750 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:14:24.652482  127750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:14:24.668122  127750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:14:24.681850  127750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:14:24.799240  127750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:14:24.918198  127750 docker.go:234] disabling docker service ...
	I1029 09:14:24.918269  127750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:14:24.933467  127750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:14:24.946404  127750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:14:25.054188  127750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:14:25.167565  127750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:14:25.180738  127750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:14:25.195319  127750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1029 09:14:25.195431  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.204806  127750 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:14:25.204872  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.214211  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.223201  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.231718  127750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:14:25.239811  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.248719  127750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.256896  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.265380  127750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:14:25.272513  127750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:14:25.279589  127750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:14:25.392013  127750 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:14:25.517665  127750 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:14:25.517732  127750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:14:25.521445  127750 start.go:564] Will wait 60s for crictl version
	I1029 09:14:25.521513  127750 ssh_runner.go:195] Run: which crictl
	I1029 09:14:25.524847  127750 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:14:25.548398  127750 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:14:25.548487  127750 ssh_runner.go:195] Run: crio --version
	I1029 09:14:25.579595  127750 ssh_runner.go:195] Run: crio --version
	I1029 09:14:25.610935  127750 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1029 09:14:25.613771  127750 cli_runner.go:164] Run: docker network inspect test-preload-026333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:14:25.629877  127750 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:14:25.633740  127750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:14:25.643098  127750 kubeadm.go:884] updating cluster {Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:14:25.643217  127750 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:14:25.643280  127750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:14:25.681280  127750 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:14:25.681305  127750 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:14:25.681364  127750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:14:25.705548  127750 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:14:25.705620  127750 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:14:25.705642  127750 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1029 09:14:25.705779  127750 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-026333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:14:25.705875  127750 ssh_runner.go:195] Run: crio config
	I1029 09:14:25.775992  127750 cni.go:84] Creating CNI manager for ""
	I1029 09:14:25.776057  127750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:14:25.776096  127750 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:14:25.776151  127750 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-026333 NodeName:test-preload-026333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:14:25.776358  127750 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-026333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
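The block above is the multi-document kubeadm config that is written a few lines further down as the 2216-byte /var/tmp/minikube/kubeadm.yaml.new. Purely as an illustration (this is not minikube's own code, and the input path is a placeholder), the document boundaries and kinds in such a file can be listed with nothing beyond the Go standard library:

// kubeadmkinds.go - sketch: list the kinds in a multi-document kubeadm
// YAML like the one rendered above. Illustrative only, not minikube code.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Placeholder path; on the node the file is /var/tmp/minikube/kubeadm.yaml.new.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split on the YAML document separator used above.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}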
	I1029 09:14:25.776445  127750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1029 09:14:25.784158  127750 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:14:25.784280  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:14:25.791815  127750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1029 09:14:25.804747  127750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:14:25.817844  127750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1029 09:14:25.830991  127750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:14:25.834853  127750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
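The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: it strips any stale entry from /etc/hosts and appends the current IP in a single pass. A stand-alone Go sketch of the same idea, assuming a scratch output file instead of the privileged sudo cp over /etc/hosts:

// hostsfix.go - sketch of the /etc/hosts one-liner above: drop any old
// control-plane.minikube.internal entry, then append the current IP.
// Writes a scratch copy; a real run would copy it over /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line except an existing control-plane mapping.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/hosts.new with", len(kept), "lines")
}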
	I1029 09:14:25.845029  127750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:14:25.958563  127750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:14:25.973952  127750 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333 for IP: 192.168.76.2
	I1029 09:14:25.973974  127750 certs.go:195] generating shared ca certs ...
	I1029 09:14:25.973990  127750 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:25.974189  127750 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:14:25.974263  127750 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:14:25.974278  127750 certs.go:257] generating profile certs ...
	I1029 09:14:25.974383  127750 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.key
	I1029 09:14:25.974481  127750 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/apiserver.key.94a5d5ab
	I1029 09:14:25.974561  127750 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/proxy-client.key
	I1029 09:14:25.974694  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:14:25.974745  127750 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:14:25.974760  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:14:25.974792  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:14:25.974847  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:14:25.974879  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:14:25.974943  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:14:25.983287  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:14:26.000865  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:14:26.024965  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:14:26.043179  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:14:26.064040  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:14:26.085305  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:14:26.106608  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:14:26.133437  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:14:26.156156  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:14:26.177677  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:14:26.200120  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:14:26.219305  127750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:14:26.232675  127750 ssh_runner.go:195] Run: openssl version
	I1029 09:14:26.239070  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:14:26.247632  127750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:14:26.251555  127750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:14:26.251638  127750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:14:26.292144  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:14:26.300143  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:14:26.308505  127750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:14:26.312337  127750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:14:26.312415  127750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:14:26.354784  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:14:26.362842  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:14:26.371135  127750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:14:26.375165  127750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:14:26.375271  127750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:14:26.416060  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:14:26.424214  127750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:14:26.427881  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:14:26.468796  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:14:26.510321  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:14:26.551281  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:14:26.599093  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:14:26.644999  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
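Each `openssl x509 -checkend 86400` run above passes only if the certificate stays valid for at least another 24 hours; otherwise minikube would regenerate it. A minimal standard-library equivalent (illustrative; the tool itself shells out to openssl exactly as logged, and the input path here is an example):

// checkend.go - sketch of the `openssl x509 -checkend 86400` test:
// exit non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Example path; the log checks certs under /var/lib/minikube/certs.
	pemBytes, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}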
	I1029 09:14:26.694616  127750 kubeadm.go:401] StartCluster: {Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:14:26.694748  127750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:14:26.694857  127750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:14:26.751493  127750 cri.go:89] found id: "1a06060e119e901d88b4d94b289efe0dbe69287388960cb1454beaca34c041d7"
	I1029 09:14:26.751559  127750 cri.go:89] found id: "9b4d431126cf885bbec493c55f7661ffa6441f9ed245ca08010fc77559325294"
	I1029 09:14:26.751583  127750 cri.go:89] found id: ""
	I1029 09:14:26.751680  127750 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:14:26.779972  127750 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:14:26Z" level=error msg="open /run/runc: no such file or directory"
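StartCluster first asks crictl for every kube-system container (two IDs are found above), then tries `runc list -f json` to decide whether any of them need unpausing; on this image /run/runc is absent, so the unpause step is skipped with the warning shown. A local sketch of the crictl call via os/exec, assuming crictl on PATH and root privileges (minikube runs the same command over SSH through ssh_runner):

// crictlps.go - sketch: collect kube-system container IDs the way the
// log does, by shelling out to crictl. Mirrors the command above only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}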
	I1029 09:14:26.780095  127750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:14:26.798147  127750 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:14:26.798207  127750 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:14:26.798293  127750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:14:26.812568  127750 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:14:26.813073  127750 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-026333" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:14:26.813233  127750 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-026333" cluster setting kubeconfig missing "test-preload-026333" context setting]
	I1029 09:14:26.813574  127750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:26.814185  127750 kapi.go:59] client config for test-preload-026333: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:14:26.814749  127750 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 09:14:26.814918  127750 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 09:14:26.814944  127750 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 09:14:26.814970  127750 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 09:14:26.815001  127750 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 09:14:26.815354  127750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:14:26.824842  127750 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1029 09:14:26.824916  127750 kubeadm.go:602] duration metric: took 26.689149ms to restartPrimaryControlPlane
	I1029 09:14:26.824940  127750 kubeadm.go:403] duration metric: took 130.332763ms to StartCluster
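restartPrimaryControlPlane diffs the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new (the `sudo diff -u` above); because they match, the running cluster "does not require reconfiguration" and the restart path finishes in about 27ms. The decision amounts to a file comparison, sketched here as a plain byte-for-byte check (minikube itself shells out to diff; adjust the paths for local experiments):

// configdiff.go - sketch of the "does the node need a new kubeadm
// config?" decision: compare the existing file with the new render.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	fresh, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err1 != nil || err2 != nil {
		fmt.Println("cannot read configs:", err1, err2)
		return
	}
	if bytes.Equal(current, fresh) {
		fmt.Println("running cluster does not require reconfiguration")
		return
	}
	fmt.Println("configs differ: cluster needs to be reconfigured")
}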
	I1029 09:14:26.824983  127750 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:26.825061  127750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:14:26.825746  127750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:26.826030  127750 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:14:26.826369  127750 config.go:182] Loaded profile config "test-preload-026333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:14:26.826437  127750 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:14:26.826550  127750 addons.go:70] Setting storage-provisioner=true in profile "test-preload-026333"
	I1029 09:14:26.826590  127750 addons.go:239] Setting addon storage-provisioner=true in "test-preload-026333"
	W1029 09:14:26.826631  127750 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:14:26.826674  127750 host.go:66] Checking if "test-preload-026333" exists ...
	I1029 09:14:26.826562  127750 addons.go:70] Setting default-storageclass=true in profile "test-preload-026333"
	I1029 09:14:26.826751  127750 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-026333"
	I1029 09:14:26.827063  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:26.827416  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:26.830033  127750 out.go:179] * Verifying Kubernetes components...
	I1029 09:14:26.833573  127750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:14:26.854783  127750 kapi.go:59] client config for test-preload-026333: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:14:26.855121  127750 addons.go:239] Setting addon default-storageclass=true in "test-preload-026333"
	W1029 09:14:26.855132  127750 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:14:26.855157  127750 host.go:66] Checking if "test-preload-026333" exists ...
	I1029 09:14:26.855572  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:26.869771  127750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:14:26.874492  127750 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:14:26.874521  127750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:14:26.874585  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:26.903033  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:26.911135  127750 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:14:26.911162  127750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:14:26.911221  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:26.943387  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:27.057468  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:14:27.092814  127750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:14:27.170808  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:27.217687  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.217727  127750 retry.go:31] will retry after 187.869988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.217779  127750 node_ready.go:35] waiting up to 6m0s for node "test-preload-026333" to be "Ready" ...
	W1029 09:14:27.259582  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.259615  127750 retry.go:31] will retry after 362.325871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
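Every failed kubectl apply above is followed by a retry.go line whose delay grows with jitter (187ms, 362ms, 475ms, ... up to tens of seconds) until the restarted apiserver accepts connections. A generic sketch of that pattern, assuming capped exponential backoff with jitter; the exact policy inside retry.go may differ:

// retrysketch.go - retry a step with jittered, capped exponential
// backoff, the pattern visible in the "will retry after ..." lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base, maxDelay time.Duration, step func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = step(); err == nil {
			return nil
		}
		// Sleep for the current delay plus up to 50% jitter.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, 10*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}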
	I1029 09:14:27.405963  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:27.499094  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.499132  127750 retry.go:31] will retry after 538.420375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.622504  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:27.707298  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.707331  127750 retry.go:31] will retry after 475.010933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.038520  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:28.130535  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.130575  127750 retry.go:31] will retry after 720.921575ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.182833  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:28.270019  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.270064  127750 retry.go:31] will retry after 329.492424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.600611  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:28.673970  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.674003  127750 retry.go:31] will retry after 446.185521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.851706  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:28.918415  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.918445  127750 retry.go:31] will retry after 693.450795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:29.120772  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:29.183677  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:29.183707  127750 retry.go:31] will retry after 1.019782998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:29.219356  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
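In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/test-preload-026333 every couple of seconds for up to 6 minutes, and each poll fails with connection refused until the apiserver comes back. A hedged client-go sketch of such a wait, reusing the host and TLS layout from the rest.Config dump earlier; the loop itself is an assumption rather than minikube's exact code, and the cert paths are placeholders:

// nodeready.go - sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/test-preload-026333/client.crt", // placeholder paths
			KeyFile:  "/path/to/profiles/test-preload-026333/client.key",
			CAFile:   "/path/to/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "test-preload-026333", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} else {
			fmt.Println("error getting node (will retry):", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node Ready")
			return
		case <-time.After(2500 * time.Millisecond):
		}
	}
}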
	I1029 09:14:29.612937  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:29.682699  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:29.682732  127750 retry.go:31] will retry after 1.493576921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:30.203846  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:30.275812  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:30.275841  127750 retry.go:31] will retry after 2.815493883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:31.177162  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:31.241955  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:31.241989  127750 retry.go:31] will retry after 1.708252829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:31.718878  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:32.950431  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:33.018251  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:33.018285  127750 retry.go:31] will retry after 2.741915957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:33.092467  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:33.154991  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:33.155019  127750 retry.go:31] will retry after 2.702777697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:34.218369  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:35.760763  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:35.824983  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:35.825023  127750 retry.go:31] will retry after 5.23592771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:35.858179  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:35.921695  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:35.921733  127750 retry.go:31] will retry after 6.187569302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:36.718285  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:39.218387  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:41.062012  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:41.149567  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:41.149596  127750 retry.go:31] will retry after 8.520776941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:41.219136  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:42.109542  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:42.183204  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:42.183239  127750 retry.go:31] will retry after 8.000217858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:43.719058  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:46.218864  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:48.219306  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:49.670576  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:49.734362  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:49.734391  127750 retry.go:31] will retry after 14.323518178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:50.183695  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:50.245617  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:50.245649  127750 retry.go:31] will retry after 11.34625372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:50.718334  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:52.718385  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:55.218365  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:57.718395  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:59.719239  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:01.593011  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:15:01.658522  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:01.658556  127750 retry.go:31] will retry after 20.875822493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:02.219228  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:04.058926  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:15:04.126011  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:04.126042  127750 retry.go:31] will retry after 13.431262762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:04.718321  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:07.218385  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:09.718305  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:11.719267  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:14.218278  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:16.218336  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:17.557989  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:15:17.629267  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:17.629305  127750 retry.go:31] will retry after 26.163170309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:18.218811  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:20.219161  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:22.219327  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:22.534605  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:15:22.598106  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:22.598139  127750 retry.go:31] will retry after 27.115132993s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:24.718343  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:27.218313  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:29.719253  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:32.219077  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:34.718329  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:36.718470  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:39.218357  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:41.718279  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:43.793547  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:15:43.883382  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:43.883409  127750 retry.go:31] will retry after 25.650219697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:44.218287  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:46.218502  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:48.219050  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:49.713486  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:15:49.775749  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:49.775781  127750 retry.go:31] will retry after 17.846460358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:50.718380  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:52.719143  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:54.719284  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:57.218283  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:59.219263  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:01.718373  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:04.219283  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:06.718274  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:16:07.622899  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:16:07.685256  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:16:07.685350  127750 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1029 09:16:08.718361  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:16:09.533805  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:16:09.601928  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:16:09.602026  127750 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1029 09:16:09.605063  127750 out.go:179] * Enabled addons: 
	I1029 09:16:09.607875  127750 addons.go:515] duration metric: took 1m42.781416348s for enable addons: enabled=[]
	W1029 09:16:11.218349  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:13.219301  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:15.718315  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:18.218309  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:20.719345  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:23.218393  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:25.719284  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:28.219258  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:30.718276  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:32.718331  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:35.218266  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:37.218330  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:39.718317  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:42.218368  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:44.718969  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:46.719224  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:49.218412  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:51.218576  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:53.718371  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:55.718415  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:58.218341  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:00.218497  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:02.718415  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:05.218481  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:07.718431  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:10.218995  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:12.718316  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:15.218265  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:17.218389  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:19.219221  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:21.718367  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:24.218282  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:26.718335  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:29.219269  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:31.718278  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:34.218326  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:36.218403  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:38.718413  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:40.719296  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:43.218320  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:45.218412  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:47.218676  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:49.718426  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:52.218358  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:54.718263  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:56.718393  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:58.718600  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:01.218540  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:03.719323  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:06.218304  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:08.719322  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:11.218430  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:13.718284  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:15.718399  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:18.218368  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:20.218439  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:22.719092  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:24.719232  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:27.218951  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:29.718327  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:32.219280  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:34.719254  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:37.218279  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:39.219330  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:41.719094  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:43.719335  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:46.218865  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:48.718213  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:50.718325  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:53.218247  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:55.218314  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:57.718311  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:00.218296  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:02.218338  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:04.718347  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:07.218317  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:09.719080  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:12.218861  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:14.219013  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:16.219175  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:18.718338  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:21.218430  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:23.718288  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:26.218332  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:28.218385  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:30.718983  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:32.719351  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:35.219281  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:37.718474  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:40.218289  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:42.218500  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:44.718272  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:46.718422  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:49.218363  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:51.218552  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:53.718205  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:55.718302  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:57.719301  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:00.218877  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:02.718370  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:05.218362  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:07.718340  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:09.718391  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:12.218422  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:14.718334  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:17.218415  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:19.718447  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:22.218354  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:24.718313  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:26.718375  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:20:27.217978  127750 node_ready.go:38] duration metric: took 6m0.000169472s for node "test-preload-026333" to be "Ready" ...
	I1029 09:20:27.221078  127750 out.go:203] 
	W1029 09:20:27.223935  127750 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1029 09:20:27.223961  127750 out.go:285] * 
	* 
	W1029 09:20:27.226118  127750 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:20:27.228915  127750 out.go:203] 

                                                
                                                
** /stderr **
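The failure mode in the stderr dump above is a poll-until-deadline loop: the node "Ready" check is retried roughly every 2.5s against a refused API-server connection until the 6-minute wait expires with "context deadline exceeded". Below is a minimal, hypothetical Go sketch of that general pattern, for orientation only; it is not minikube's actual implementation, and the function and variable names are illustrative assumptions.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForCondition polls check() until it reports true or the context
// deadline expires, retrying on transient errors. This mirrors the shape
// of the node-readiness wait seen in the log above; illustrative only.
func waitForCondition(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			// Transient errors (e.g. "connection refused") are logged and retried.
			fmt.Printf("condition check failed (will retry): %v\n", err)
		} else if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for condition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// 6-minute budget, matching the "wait 6m0s for node" message above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	// Stand-in check that always fails, like the refused apiserver dial in the log.
	err := waitForCondition(ctx, 2500*time.Millisecond, func() (bool, error) {
		return false, errors.New("dial tcp 192.168.76.2:8443: connect: connection refused")
	})
	fmt.Println(err) // after ~6m: waiting for condition: context deadline exceeded
}
```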
preload_test.go:67: out/minikube-linux-arm64 start -p test-preload-026333 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
panic.go:636: *** TestPreload FAILED at 2025-10-29 09:20:27.292473936 +0000 UTC m=+3610.312051079
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-026333
helpers_test.go:243: (dbg) docker inspect test-preload-026333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c",
	        "Created": "2025-10-29T09:13:04.884894027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 127876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:14:19.606095759Z",
	            "FinishedAt": "2025-10-29T09:14:15.433742839Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c/hostname",
	        "HostsPath": "/var/lib/docker/containers/972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c/hosts",
	        "LogPath": "/var/lib/docker/containers/972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c/972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c-json.log",
	        "Name": "/test-preload-026333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-026333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-026333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "972d59663267605010858a7e61ffb373a660294be266ebc603bf32c7a454060c",
	                "LowerDir": "/var/lib/docker/overlay2/e50b1a04c7841b8ae7506790edf37f4005c617ffe247be365220be692a54e505-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e50b1a04c7841b8ae7506790edf37f4005c617ffe247be365220be692a54e505/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e50b1a04c7841b8ae7506790edf37f4005c617ffe247be365220be692a54e505/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e50b1a04c7841b8ae7506790edf37f4005c617ffe247be365220be692a54e505/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-026333",
	                "Source": "/var/lib/docker/volumes/test-preload-026333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-026333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-026333",
	                "name.minikube.sigs.k8s.io": "test-preload-026333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d74892b3e7f7aec420a358f1bf4722c4d5d1d5536e5e0c99c1d85e2e050fa884",
	            "SandboxKey": "/var/run/docker/netns/d74892b3e7f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32953"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-026333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:fa:6f:e5:5f:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2e6af0cf02b59dac587445c3f7c5d4e07f7a1cc74a87c743be653627c0f9f097",
	                    "EndpointID": "dcd82055b437f0a0b5d0c910814857ed4e825d80ae9be78d58d8f3f3c3b1e254",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-026333",
	                        "972d59663267"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
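The inspect output above records the host ports Docker published for the test-preload-026333 container (22/tcp on 32953, 8443/tcp on 32956, and so on); the restart log further down reads the SSH port back with a Go template before each SSH session. A minimal Go sketch of the same lookup, assuming only that the docker CLI is on PATH and that the container name is valid:

// Illustrative sketch: resolve the published host port for 22/tcp the same way
// the logged cli_runner invocation does, by shelling out to docker with a Go template.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("test-preload-026333")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 32953 in the run captured above
}

Against the container state captured above this would print 32953, the same HostPort the provisioning steps in the start log below dial for SSH.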
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p test-preload-026333 -n test-preload-026333
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p test-preload-026333 -n test-preload-026333: exit status 2 (313.044367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-026333 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-279229 cp multinode-279229-m03:/home/docker/cp-test.txt multinode-279229:/home/docker/cp-test_multinode-279229-m03_multinode-279229.txt         │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ multinode-279229 ssh -n multinode-279229-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ multinode-279229 ssh -n multinode-279229 sudo cat /home/docker/cp-test_multinode-279229-m03_multinode-279229.txt                                          │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ cp      │ multinode-279229 cp multinode-279229-m03:/home/docker/cp-test.txt multinode-279229-m02:/home/docker/cp-test_multinode-279229-m03_multinode-279229-m02.txt │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ multinode-279229 ssh -n multinode-279229-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ multinode-279229 ssh -n multinode-279229-m02 sudo cat /home/docker/cp-test_multinode-279229-m03_multinode-279229-m02.txt                                  │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ node    │ multinode-279229 node stop m03                                                                                                                            │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ node    │ multinode-279229 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ node    │ list -p multinode-279229                                                                                                                                  │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p multinode-279229                                                                                                                                       │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p multinode-279229 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ node    │ list -p multinode-279229                                                                                                                                  │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ node    │ multinode-279229 node delete m03                                                                                                                          │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ stop    │ multinode-279229 stop                                                                                                                                     │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ start   │ -p multinode-279229 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:12 UTC │
	│ node    │ list -p multinode-279229                                                                                                                                  │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │                     │
	│ start   │ -p multinode-279229-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-279229-m02 │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │                     │
	│ start   │ -p multinode-279229-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-279229-m03 │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │ 29 Oct 25 09:12 UTC │
	│ node    │ add -p multinode-279229                                                                                                                                   │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │                     │
	│ delete  │ -p multinode-279229-m03                                                                                                                                   │ multinode-279229-m03 │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │ 29 Oct 25 09:12 UTC │
	│ delete  │ -p multinode-279229                                                                                                                                       │ multinode-279229     │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │ 29 Oct 25 09:13 UTC │
	│ start   │ -p test-preload-026333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-026333  │ jenkins │ v1.37.0 │ 29 Oct 25 09:13 UTC │ 29 Oct 25 09:14 UTC │
	│ image   │ test-preload-026333 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-026333  │ jenkins │ v1.37.0 │ 29 Oct 25 09:14 UTC │ 29 Oct 25 09:14 UTC │
	│ stop    │ -p test-preload-026333                                                                                                                                    │ test-preload-026333  │ jenkins │ v1.37.0 │ 29 Oct 25 09:14 UTC │ 29 Oct 25 09:14 UTC │
	│ start   │ -p test-preload-026333 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-026333  │ jenkins │ v1.37.0 │ 29 Oct 25 09:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:14:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:14:15.733803  127750 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:14:15.734006  127750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:14:15.734034  127750 out.go:374] Setting ErrFile to fd 2...
	I1029 09:14:15.734053  127750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:14:15.734350  127750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:14:15.734755  127750 out.go:368] Setting JSON to false
	I1029 09:14:15.735622  127750 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3408,"bootTime":1761725848,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:14:15.735718  127750 start.go:143] virtualization:  
	I1029 09:14:15.738965  127750 out.go:179] * [test-preload-026333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:14:15.742701  127750 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:14:15.742828  127750 notify.go:221] Checking for updates...
	I1029 09:14:15.748496  127750 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:14:15.751358  127750 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:14:15.754293  127750 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:14:15.757130  127750 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:14:15.760008  127750 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:14:15.763393  127750 config.go:182] Loaded profile config "test-preload-026333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:14:15.766882  127750 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1029 09:14:15.769759  127750 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:14:15.800049  127750 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:14:15.800164  127750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:14:15.864872  127750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 09:14:15.855258031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:14:15.864976  127750 docker.go:319] overlay module found
	I1029 09:14:15.868104  127750 out.go:179] * Using the docker driver based on existing profile
	I1029 09:14:15.870996  127750 start.go:309] selected driver: docker
	I1029 09:14:15.871021  127750 start.go:930] validating driver "docker" against &{Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:14:15.871127  127750 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:14:15.871828  127750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:14:15.932602  127750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-29 09:14:15.914664887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:14:15.932926  127750 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:14:15.932966  127750 cni.go:84] Creating CNI manager for ""
	I1029 09:14:15.933026  127750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:14:15.933076  127750 start.go:353] cluster config:
	{Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:14:15.936121  127750 out.go:179] * Starting "test-preload-026333" primary control-plane node in "test-preload-026333" cluster
	I1029 09:14:15.939009  127750 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:14:15.942088  127750 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:14:15.944846  127750 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:14:15.944938  127750 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:14:15.963902  127750 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:14:15.963931  127750 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:14:16.001241  127750 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1029 09:14:16.001268  127750 cache.go:59] Caching tarball of preloaded images
	I1029 09:14:16.001433  127750 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:14:16.006225  127750 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1029 09:14:16.009109  127750 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1029 09:14:16.102092  127750 preload.go:290] Got checksum from GCS API "d3dc3b83b826438926b7b91af837ed7b"
	I1029 09:14:16.102166  127750 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1029 09:14:19.552944  127750 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1029 09:14:19.553143  127750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/config.json ...
	I1029 09:14:19.553384  127750 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:14:19.553413  127750 start.go:360] acquireMachinesLock for test-preload-026333: {Name:mke876a7ee2c7a778c48eec68b2f6ad625f5e63b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:14:19.553483  127750 start.go:364] duration metric: took 41.337µs to acquireMachinesLock for "test-preload-026333"
	I1029 09:14:19.553495  127750 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:14:19.553501  127750 fix.go:54] fixHost starting: 
	I1029 09:14:19.553764  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:19.570203  127750 fix.go:112] recreateIfNeeded on test-preload-026333: state=Stopped err=<nil>
	W1029 09:14:19.570235  127750 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:14:19.573563  127750 out.go:252] * Restarting existing docker container for "test-preload-026333" ...
	I1029 09:14:19.573638  127750 cli_runner.go:164] Run: docker start test-preload-026333
	I1029 09:14:19.808338  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:19.832017  127750 kic.go:430] container "test-preload-026333" state is running.
	I1029 09:14:19.832606  127750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-026333
	I1029 09:14:19.860019  127750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/config.json ...
	I1029 09:14:19.860443  127750 machine.go:94] provisionDockerMachine start ...
	I1029 09:14:19.860573  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:19.881097  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:19.881407  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:19.881416  127750 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:14:19.882144  127750 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:14:23.032332  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-026333
	
	I1029 09:14:23.032358  127750 ubuntu.go:182] provisioning hostname "test-preload-026333"
	I1029 09:14:23.032479  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.052499  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:23.052827  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:23.052853  127750 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-026333 && echo "test-preload-026333" | sudo tee /etc/hostname
	I1029 09:14:23.209944  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-026333
	
	I1029 09:14:23.210018  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.227313  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:23.227628  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:23.227650  127750 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-026333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-026333/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-026333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:14:23.376469  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:14:23.376499  127750 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:14:23.376517  127750 ubuntu.go:190] setting up certificates
	I1029 09:14:23.376537  127750 provision.go:84] configureAuth start
	I1029 09:14:23.376599  127750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-026333
	I1029 09:14:23.393852  127750 provision.go:143] copyHostCerts
	I1029 09:14:23.393925  127750 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:14:23.393945  127750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:14:23.394018  127750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:14:23.394118  127750 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:14:23.394127  127750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:14:23.394156  127750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:14:23.394221  127750 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:14:23.394230  127750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:14:23.394257  127750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:14:23.394319  127750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.test-preload-026333 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-026333]
	I1029 09:14:23.551682  127750 provision.go:177] copyRemoteCerts
	I1029 09:14:23.551760  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:14:23.551828  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.570825  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:23.676387  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:14:23.693267  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1029 09:14:23.710471  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:14:23.727144  127750 provision.go:87] duration metric: took 350.589343ms to configureAuth
	I1029 09:14:23.727170  127750 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:14:23.727359  127750 config.go:182] Loaded profile config "test-preload-026333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:14:23.727468  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:23.744267  127750 main.go:143] libmachine: Using SSH client type: native
	I1029 09:14:23.744625  127750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I1029 09:14:23.744648  127750 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:14:24.055100  127750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:14:24.055128  127750 machine.go:97] duration metric: took 4.194626776s to provisionDockerMachine
	I1029 09:14:24.055139  127750 start.go:293] postStartSetup for "test-preload-026333" (driver="docker")
	I1029 09:14:24.055151  127750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:14:24.055227  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:14:24.055272  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.076276  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.180199  127750 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:14:24.183859  127750 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:14:24.183901  127750 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:14:24.183937  127750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:14:24.184029  127750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:14:24.184138  127750 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:14:24.184286  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:14:24.193193  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:14:24.210652  127750 start.go:296] duration metric: took 155.49644ms for postStartSetup
	I1029 09:14:24.210752  127750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:14:24.210816  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.228045  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.329567  127750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:14:24.334459  127750 fix.go:56] duration metric: took 4.780951282s for fixHost
	I1029 09:14:24.334485  127750 start.go:83] releasing machines lock for "test-preload-026333", held for 4.780992965s
	I1029 09:14:24.334570  127750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-026333
	I1029 09:14:24.351255  127750 ssh_runner.go:195] Run: cat /version.json
	I1029 09:14:24.351308  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.351373  127750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:14:24.351441  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:24.371882  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.371922  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:24.570519  127750 ssh_runner.go:195] Run: systemctl --version
	I1029 09:14:24.576978  127750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:14:24.611955  127750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:14:24.616581  127750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:14:24.616699  127750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:14:24.624488  127750 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:14:24.624515  127750 start.go:496] detecting cgroup driver to use...
	I1029 09:14:24.624558  127750 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:14:24.624614  127750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:14:24.639536  127750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:14:24.652421  127750 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:14:24.652482  127750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:14:24.668122  127750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:14:24.681850  127750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:14:24.799240  127750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:14:24.918198  127750 docker.go:234] disabling docker service ...
	I1029 09:14:24.918269  127750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:14:24.933467  127750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:14:24.946404  127750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:14:25.054188  127750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:14:25.167565  127750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:14:25.180738  127750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:14:25.195319  127750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1029 09:14:25.195431  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.204806  127750 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:14:25.204872  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.214211  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.223201  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.231718  127750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:14:25.239811  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.248719  127750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.256896  127750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:14:25.265380  127750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:14:25.272513  127750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:14:25.279589  127750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:14:25.392013  127750 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:14:25.517665  127750 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:14:25.517732  127750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:14:25.521445  127750 start.go:564] Will wait 60s for crictl version
	I1029 09:14:25.521513  127750 ssh_runner.go:195] Run: which crictl
	I1029 09:14:25.524847  127750 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:14:25.548398  127750 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:14:25.548487  127750 ssh_runner.go:195] Run: crio --version
	I1029 09:14:25.579595  127750 ssh_runner.go:195] Run: crio --version
	I1029 09:14:25.610935  127750 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1029 09:14:25.613771  127750 cli_runner.go:164] Run: docker network inspect test-preload-026333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:14:25.629877  127750 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:14:25.633740  127750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:14:25.643098  127750 kubeadm.go:884] updating cluster {Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:14:25.643217  127750 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:14:25.643280  127750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:14:25.681280  127750 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:14:25.681305  127750 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:14:25.681364  127750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:14:25.705548  127750 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:14:25.705620  127750 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:14:25.705642  127750 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1029 09:14:25.705779  127750 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-026333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:14:25.705875  127750 ssh_runner.go:195] Run: crio config
	I1029 09:14:25.775992  127750 cni.go:84] Creating CNI manager for ""
	I1029 09:14:25.776057  127750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:14:25.776096  127750 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:14:25.776151  127750 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-026333 NodeName:test-preload-026333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:14:25.776358  127750 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-026333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:14:25.776445  127750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1029 09:14:25.784158  127750 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:14:25.784280  127750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:14:25.791815  127750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1029 09:14:25.804747  127750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:14:25.817844  127750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1029 09:14:25.830991  127750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:14:25.834853  127750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:14:25.845029  127750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:14:25.958563  127750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:14:25.973952  127750 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333 for IP: 192.168.76.2
	I1029 09:14:25.973974  127750 certs.go:195] generating shared ca certs ...
	I1029 09:14:25.973990  127750 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:25.974189  127750 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:14:25.974263  127750 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:14:25.974278  127750 certs.go:257] generating profile certs ...
	I1029 09:14:25.974383  127750 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.key
	I1029 09:14:25.974481  127750 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/apiserver.key.94a5d5ab
	I1029 09:14:25.974561  127750 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/proxy-client.key
	I1029 09:14:25.974694  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:14:25.974745  127750 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:14:25.974760  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:14:25.974792  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:14:25.974847  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:14:25.974879  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:14:25.974943  127750 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:14:25.983287  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:14:26.000865  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:14:26.024965  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:14:26.043179  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:14:26.064040  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:14:26.085305  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:14:26.106608  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:14:26.133437  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:14:26.156156  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:14:26.177677  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:14:26.200120  127750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:14:26.219305  127750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:14:26.232675  127750 ssh_runner.go:195] Run: openssl version
	I1029 09:14:26.239070  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:14:26.247632  127750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:14:26.251555  127750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:14:26.251638  127750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:14:26.292144  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:14:26.300143  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:14:26.308505  127750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:14:26.312337  127750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:14:26.312415  127750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:14:26.354784  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:14:26.362842  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:14:26.371135  127750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:14:26.375165  127750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:14:26.375271  127750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:14:26.416060  127750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
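The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hash-named CA layout: the link name is the subject hash printed by the preceding "openssl x509 -hash" run, plus a .0 suffix. A minimal sketch of that derivation for the first cert, reusing the paths from this log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"             # .0 suffix disambiguates hash collisions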
	I1029 09:14:26.424214  127750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:14:26.427881  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:14:26.468796  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:14:26.510321  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:14:26.551281  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:14:26.599093  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:14:26.644999  127750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
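The six -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours (86400 seconds) from now; openssl exits 0 if so and non-zero if the certificate expires inside that window. A minimal sketch of the same check for one of the certs listed:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "still valid for at least 24h"
    else
        echo "expires within 24h (or is already expired)"
    fi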
	I1029 09:14:26.694616  127750 kubeadm.go:401] StartCluster: {Name:test-preload-026333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-026333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:14:26.694748  127750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:14:26.694857  127750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:14:26.751493  127750 cri.go:89] found id: "1a06060e119e901d88b4d94b289efe0dbe69287388960cb1454beaca34c041d7"
	I1029 09:14:26.751559  127750 cri.go:89] found id: "9b4d431126cf885bbec493c55f7661ffa6441f9ed245ca08010fc77559325294"
	I1029 09:14:26.751583  127750 cri.go:89] found id: ""
	I1029 09:14:26.751680  127750 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:14:26.779972  127750 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:14:26Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:14:26.780095  127750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:14:26.798147  127750 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:14:26.798207  127750 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:14:26.798293  127750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:14:26.812568  127750 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:14:26.813073  127750 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-026333" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:14:26.813233  127750 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-026333" cluster setting kubeconfig missing "test-preload-026333" context setting]
	I1029 09:14:26.813574  127750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:26.814185  127750 kapi.go:59] client config for test-preload-026333: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:14:26.814749  127750 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 09:14:26.814918  127750 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 09:14:26.814944  127750 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 09:14:26.814970  127750 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 09:14:26.815001  127750 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 09:14:26.815354  127750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:14:26.824842  127750 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1029 09:14:26.824916  127750 kubeadm.go:602] duration metric: took 26.689149ms to restartPrimaryControlPlane
	I1029 09:14:26.824940  127750 kubeadm.go:403] duration metric: took 130.332763ms to StartCluster
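The diff -u run at 09:14:26.815354 compares the kubeadm.yaml already on the node against the freshly generated one; presumably an empty diff (exit status 0) is what lets the restart path conclude above that the running cluster needs no reconfiguration. A sketch of that comparison, assuming the same paths:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        echo "running cluster does not require reconfiguration"
    fi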
	I1029 09:14:26.824983  127750 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:26.825061  127750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:14:26.825746  127750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:14:26.826030  127750 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:14:26.826369  127750 config.go:182] Loaded profile config "test-preload-026333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:14:26.826437  127750 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:14:26.826550  127750 addons.go:70] Setting storage-provisioner=true in profile "test-preload-026333"
	I1029 09:14:26.826590  127750 addons.go:239] Setting addon storage-provisioner=true in "test-preload-026333"
	W1029 09:14:26.826631  127750 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:14:26.826674  127750 host.go:66] Checking if "test-preload-026333" exists ...
	I1029 09:14:26.826562  127750 addons.go:70] Setting default-storageclass=true in profile "test-preload-026333"
	I1029 09:14:26.826751  127750 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-026333"
	I1029 09:14:26.827063  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:26.827416  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:26.830033  127750 out.go:179] * Verifying Kubernetes components...
	I1029 09:14:26.833573  127750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:14:26.854783  127750 kapi.go:59] client config for test-preload-026333: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/test-preload-026333/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:14:26.855121  127750 addons.go:239] Setting addon default-storageclass=true in "test-preload-026333"
	W1029 09:14:26.855132  127750 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:14:26.855157  127750 host.go:66] Checking if "test-preload-026333" exists ...
	I1029 09:14:26.855572  127750 cli_runner.go:164] Run: docker container inspect test-preload-026333 --format={{.State.Status}}
	I1029 09:14:26.869771  127750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:14:26.874492  127750 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:14:26.874521  127750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:14:26.874585  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:26.903033  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:26.911135  127750 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:14:26.911162  127750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:14:26.911221  127750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-026333
	I1029 09:14:26.943387  127750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/test-preload-026333/id_rsa Username:docker}
	I1029 09:14:27.057468  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:14:27.092814  127750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:14:27.170808  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:27.217687  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.217727  127750 retry.go:31] will retry after 187.869988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.217779  127750 node_ready.go:35] waiting up to 6m0s for node "test-preload-026333" to be "Ready" ...
	W1029 09:14:27.259582  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.259615  127750 retry.go:31] will retry after 362.325871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
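Each apply failure above suggests --validate=false as a workaround; that flag only skips the client-side OpenAPI schema download, so here it would not help: kubectl still has to reach the apiserver, and localhost:8443 is refusing connections. A sketch of the suggested invocation, using the same kubectl binary and manifest as the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.32.0/kubectl apply --validate=false \
        -f /etc/kubernetes/addons/storage-provisioner.yaml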
	I1029 09:14:27.405963  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:27.499094  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.499132  127750 retry.go:31] will retry after 538.420375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.622504  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:27.707298  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:27.707331  127750 retry.go:31] will retry after 475.010933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.038520  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:28.130535  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.130575  127750 retry.go:31] will retry after 720.921575ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.182833  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:28.270019  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.270064  127750 retry.go:31] will retry after 329.492424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.600611  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:28.673970  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.674003  127750 retry.go:31] will retry after 446.185521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.851706  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:28.918415  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:28.918445  127750 retry.go:31] will retry after 693.450795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:29.120772  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:29.183677  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:29.183707  127750 retry.go:31] will retry after 1.019782998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:29.219356  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:29.612937  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:29.682699  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:29.682732  127750 retry.go:31] will retry after 1.493576921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:30.203846  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:30.275812  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:30.275841  127750 retry.go:31] will retry after 2.815493883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:31.177162  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:31.241955  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:31.241989  127750 retry.go:31] will retry after 1.708252829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:31.718878  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:32.950431  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:33.018251  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:33.018285  127750 retry.go:31] will retry after 2.741915957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:33.092467  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:33.154991  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:33.155019  127750 retry.go:31] will retry after 2.702777697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:34.218369  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:35.760763  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:35.824983  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:35.825023  127750 retry.go:31] will retry after 5.23592771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:35.858179  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:35.921695  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:35.921733  127750 retry.go:31] will retry after 6.187569302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:36.718285  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:39.218387  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:41.062012  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:41.149567  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:41.149596  127750 retry.go:31] will retry after 8.520776941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:41.219136  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:42.109542  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:42.183204  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:42.183239  127750 retry.go:31] will retry after 8.000217858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:43.719058  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:46.218864  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:48.219306  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:14:49.670576  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:14:49.734362  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:49.734391  127750 retry.go:31] will retry after 14.323518178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:50.183695  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:14:50.245617  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:14:50.245649  127750 retry.go:31] will retry after 11.34625372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:14:50.718334  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:52.718385  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:55.218365  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:57.718395  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:14:59.719239  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:01.593011  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:15:01.658522  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:01.658556  127750 retry.go:31] will retry after 20.875822493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:02.219228  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:04.058926  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:15:04.126011  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:04.126042  127750 retry.go:31] will retry after 13.431262762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:04.718321  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:07.218385  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:09.718305  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:11.719267  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:14.218278  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:16.218336  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
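The repeated node_ready failures above all reduce to the same symptom: nothing is accepting connections on 192.168.76.2:8443 yet. A minimal way to probe that endpoint by hand while the poller waits, assuming curl is available on the host:

    curl -k --connect-timeout 2 https://192.168.76.2:8443/healthz || echo "apiserver not reachable yet"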
	I1029 09:15:17.557989  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:15:17.629267  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:17.629305  127750 retry.go:31] will retry after 26.163170309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:18.218811  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:20.219161  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:22.219327  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:22.534605  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:15:22.598106  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:22.598139  127750 retry.go:31] will retry after 27.115132993s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:24.718343  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:27.218313  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:29.719253  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:32.219077  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:34.718329  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:36.718470  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:39.218357  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:41.718279  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:43.793547  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:15:43.883382  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:43.883409  127750 retry.go:31] will retry after 25.650219697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:44.218287  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:46.218502  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:48.219050  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:15:49.713486  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:15:49.775749  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 09:15:49.775781  127750 retry.go:31] will retry after 17.846460358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:15:50.718380  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:52.719143  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:54.719284  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:57.218283  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:15:59.219263  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:01.718373  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:04.219283  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:06.718274  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:16:07.622899  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1029 09:16:07.685256  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:16:07.685350  127750 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1029 09:16:08.718361  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:16:09.533805  127750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1029 09:16:09.601928  127750 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 09:16:09.602026  127750 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1029 09:16:09.605063  127750 out.go:179] * Enabled addons: 
	I1029 09:16:09.607875  127750 addons.go:515] duration metric: took 1m42.781416348s for enable addons: enabled=[]
	W1029 09:16:11.218349  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:13.219301  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:15.718315  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:18.218309  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:20.719345  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:23.218393  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:25.719284  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:28.219258  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:30.718276  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:32.718331  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:35.218266  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:37.218330  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:39.718317  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:42.218368  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:44.718969  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:46.719224  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:49.218412  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:51.218576  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:53.718371  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:55.718415  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:16:58.218341  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:00.218497  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:02.718415  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:05.218481  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:07.718431  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:10.218995  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:12.718316  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:15.218265  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:17.218389  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:19.219221  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:21.718367  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:24.218282  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:26.718335  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:29.219269  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:31.718278  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:34.218326  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:36.218403  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:38.718413  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:40.719296  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:43.218320  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:45.218412  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:47.218676  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:49.718426  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:52.218358  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:54.718263  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:56.718393  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:17:58.718600  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:01.218540  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:03.719323  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:06.218304  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:08.719322  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:11.218430  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:13.718284  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:15.718399  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:18.218368  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:20.218439  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:22.719092  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:24.719232  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:27.218951  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:29.718327  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:32.219280  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:34.719254  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:37.218279  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:39.219330  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:41.719094  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:43.719335  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:46.218865  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:48.718213  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:50.718325  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:53.218247  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:55.218314  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:18:57.718311  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:00.218296  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:02.218338  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:04.718347  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:07.218317  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:09.719080  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:12.218861  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:14.219013  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:16.219175  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:18.718338  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:21.218430  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:23.718288  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:26.218332  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:28.218385  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:30.718983  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:32.719351  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:35.219281  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:37.718474  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:40.218289  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:42.218500  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:44.718272  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:46.718422  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:49.218363  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:51.218552  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:53.718205  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:55.718302  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:19:57.719301  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:00.218877  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:02.718370  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:05.218362  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:07.718340  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:09.718391  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:12.218422  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:14.718334  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:17.218415  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:19.718447  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:22.218354  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:24.718313  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	W1029 09:20:26.718375  127750 node_ready.go:55] error getting node "test-preload-026333" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-026333": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:20:27.217978  127750 node_ready.go:38] duration metric: took 6m0.000169472s for node "test-preload-026333" to be "Ready" ...
	I1029 09:20:27.221078  127750 out.go:203] 
	W1029 09:20:27.223935  127750 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1029 09:20:27.223961  127750 out.go:285] * 
	W1029 09:20:27.226118  127750 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:20:27.228915  127750 out.go:203] 
	
	
	==> CRI-O <==
	Oct 29 09:16:57 test-preload-026333 conmon[1204]: conmon ad5aece07e3a81d70cf2 <ninfo>: container 1206 exited with status 1
	Oct 29 09:16:57 test-preload-026333 crio[638]: time="2025-10-29T09:16:57.488533125Z" level=info msg="Removing container: 7b10304bfd2afd6bc34d5020a2659811734f3eab1dd21dcf0360587a7b691999" id=a8f4bcd0-e726-4c74-bec7-8d6d13ce179e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:16:57 test-preload-026333 crio[638]: time="2025-10-29T09:16:57.497484527Z" level=info msg="Error loading conmon cgroup of container 7b10304bfd2afd6bc34d5020a2659811734f3eab1dd21dcf0360587a7b691999: cgroup deleted" id=a8f4bcd0-e726-4c74-bec7-8d6d13ce179e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:16:57 test-preload-026333 crio[638]: time="2025-10-29T09:16:57.500619025Z" level=info msg="Removed container 7b10304bfd2afd6bc34d5020a2659811734f3eab1dd21dcf0360587a7b691999: kube-system/kube-controller-manager-test-preload-026333/kube-controller-manager" id=a8f4bcd0-e726-4c74-bec7-8d6d13ce179e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:17:40 test-preload-026333 crio[638]: time="2025-10-29T09:17:40.394753342Z" level=info msg="createCtr: deleting container 0e58e0a8139fa388617f11e2986cbdb38413471d3eebdc6ad508ad4c80739dcc from storage" id=03278585-b428-4c8b-9f35-033ecb129ecf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:17:40 test-preload-026333 crio[638]: time="2025-10-29T09:17:40.395057625Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/594384148e7acbf027939ffb006c099e24ffd14e28bdaba486cde345508bab74/merged\": directory not empty" id=03278585-b428-4c8b-9f35-033ecb129ecf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:17:40 test-preload-026333 crio[638]: time="2025-10-29T09:17:40.416939924Z" level=info msg="createCtr: deleting container d878c188474db7ad301b1b75b3b95c2f4e555baa558cc2c7180f37c404f8d543 from storage" id=83a08399-dbf7-49e7-a50d-2452b620ae5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:17:40 test-preload-026333 crio[638]: time="2025-10-29T09:17:40.417229797Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/918e105a37db2180c880f83cff94bc7fcf291ba2a2d974302f20c67fa681b4ab/merged\": directory not empty" id=83a08399-dbf7-49e7-a50d-2452b620ae5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.181900198Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.32.0" id=3589dae9-cbeb-444a-80d0-b925c5e7872c name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.182830595Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.32.0" id=afeb9ec8-1333-4fd4-a3f9-662604df0f3b name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.183683986Z" level=info msg="Creating container: kube-system/kube-controller-manager-test-preload-026333/kube-controller-manager" id=b6fd0594-6924-4565-910b-d6e3f373d2ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.183783401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.188335037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.18898379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.206548934Z" level=info msg="Created container 03e7a47038c77f4f06bac0b76bfb76ce1baf798bca735da6da0e3b61d0628d55: kube-system/kube-controller-manager-test-preload-026333/kube-controller-manager" id=b6fd0594-6924-4565-910b-d6e3f373d2ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.207137699Z" level=info msg="Starting container: 03e7a47038c77f4f06bac0b76bfb76ce1baf798bca735da6da0e3b61d0628d55" id=d1d47e7b-3d47-4352-af99-928c1824296b name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:18:24 test-preload-026333 crio[638]: time="2025-10-29T09:18:24.211391781Z" level=info msg="Started container" PID=1222 containerID=03e7a47038c77f4f06bac0b76bfb76ce1baf798bca735da6da0e3b61d0628d55 description=kube-system/kube-controller-manager-test-preload-026333/kube-controller-manager id=d1d47e7b-3d47-4352-af99-928c1824296b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8af8ab3cbb0930a40eab48b10f2cd3949dd272f808ee1f2498a6efc5c987d18c
	Oct 29 09:18:36 test-preload-026333 conmon[1220]: conmon 03e7a47038c77f4f06ba <ninfo>: container 1222 exited with status 1
	Oct 29 09:18:36 test-preload-026333 crio[638]: time="2025-10-29T09:18:36.662935386Z" level=info msg="Removing container: ad5aece07e3a81d70cf276333cdf1834bc8abc2882e5e2c92589b149c636b2d3" id=abad4925-6a45-44f2-81ff-053461a19d53 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:18:36 test-preload-026333 crio[638]: time="2025-10-29T09:18:36.671209038Z" level=info msg="Error loading conmon cgroup of container ad5aece07e3a81d70cf276333cdf1834bc8abc2882e5e2c92589b149c636b2d3: cgroup deleted" id=abad4925-6a45-44f2-81ff-053461a19d53 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:18:36 test-preload-026333 crio[638]: time="2025-10-29T09:18:36.674193897Z" level=info msg="Removed container ad5aece07e3a81d70cf276333cdf1834bc8abc2882e5e2c92589b149c636b2d3: kube-system/kube-controller-manager-test-preload-026333/kube-controller-manager" id=abad4925-6a45-44f2-81ff-053461a19d53 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:19:17 test-preload-026333 crio[638]: time="2025-10-29T09:19:17.705027577Z" level=info msg="createCtr: deleting container 0e58e0a8139fa388617f11e2986cbdb38413471d3eebdc6ad508ad4c80739dcc from storage" id=03278585-b428-4c8b-9f35-033ecb129ecf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:19:17 test-preload-026333 crio[638]: time="2025-10-29T09:19:17.705362424Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/594384148e7acbf027939ffb006c099e24ffd14e28bdaba486cde345508bab74/merged\": directory not empty" id=03278585-b428-4c8b-9f35-033ecb129ecf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:19:17 test-preload-026333 crio[638]: time="2025-10-29T09:19:17.727246972Z" level=info msg="createCtr: deleting container d878c188474db7ad301b1b75b3b95c2f4e555baa558cc2c7180f37c404f8d543 from storage" id=83a08399-dbf7-49e7-a50d-2452b620ae5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:19:17 test-preload-026333 crio[638]: time="2025-10-29T09:19:17.727514527Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/918e105a37db2180c880f83cff94bc7fcf291ba2a2d974302f20c67fa681b4ab/merged\": directory not empty" id=83a08399-dbf7-49e7-a50d-2452b620ae5f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	03e7a47038c77       a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c   2 minutes ago       Exited              kube-controller-manager   6                   8af8ab3cbb093       kube-controller-manager-test-preload-026333   kube-system
	1a06060e119e9       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82   6 minutes ago       Running             etcd                      1                   ae4c3c798159c       etcd-test-preload-026333                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct29 08:43] overlayfs: idmapped layers are currently not supported
	[Oct29 08:45] overlayfs: idmapped layers are currently not supported
	[Oct29 08:46] overlayfs: idmapped layers are currently not supported
	[Oct29 08:47] overlayfs: idmapped layers are currently not supported
	[  +4.220383] overlayfs: idmapped layers are currently not supported
	[Oct29 08:48] overlayfs: idmapped layers are currently not supported
	[Oct29 08:56] overlayfs: idmapped layers are currently not supported
	[  +3.225081] overlayfs: idmapped layers are currently not supported
	[Oct29 08:57] overlayfs: idmapped layers are currently not supported
	[Oct29 08:58] overlayfs: idmapped layers are currently not supported
	[Oct29 08:59] overlayfs: idmapped layers are currently not supported
	[Oct29 09:04] overlayfs: idmapped layers are currently not supported
	[Oct29 09:05] overlayfs: idmapped layers are currently not supported
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1a06060e119e901d88b4d94b289efe0dbe69287388960cb1454beaca34c041d7] <==
	{"level":"info","ts":"2025-10-29T09:14:26.935571Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-29T09:14:26.935855Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:14:26.936087Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:14:26.952934Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-29T09:14:26.954438Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:14:26.954683Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:14:26.954713Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:14:26.954807Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:14:26.954815Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:14:28.163664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-29T09:14:28.163711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:14:28.163743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-29T09:14:28.163755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-29T09:14:28.163762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-29T09:14:28.163772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-29T09:14:28.163781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-29T09:14:28.165014Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:test-preload-026333 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:14:28.165177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:14:28.165420Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:14:28.165513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:14:28.165524Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-29T09:14:28.165965Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-29T09:14:28.166609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-29T09:14:28.166980Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-29T09:14:28.167567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:20:28 up  1:02,  0 user,  load average: 0.07, 0.56, 1.21
	Linux test-preload-026333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-controller-manager [03e7a47038c77f4f06bac0b76bfb76ce1baf798bca735da6da0e3b61d0628d55] <==
	I1029 09:18:26.222556       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:18:26.574536       1 controllermanager.go:185] "Starting" version="v1.32.0"
	I1029 09:18:26.574564       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:18:26.575970       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1029 09:18:26.576139       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1029 09:18:26.576275       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1029 09:18:26.576361       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 09:18:36.578395       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.76.2:8443/healthz\": dial tcp 192.168.76.2:8443: connect: connection refused"
	
	
	==> kubelet <==
	Oct 29 09:20:05 test-preload-026333 kubelet[763]: E1029 09:20:05.180971     763 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-026333\" not found" node="test-preload-026333"
	Oct 29 09:20:06 test-preload-026333 kubelet[763]: E1029 09:20:06.205052     763 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-026333\" not found"
	Oct 29 09:20:07 test-preload-026333 kubelet[763]: W1029 09:20:07.105979     763 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 29 09:20:07 test-preload-026333 kubelet[763]: E1029 09:20:07.106061     763 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 29 09:20:07 test-preload-026333 kubelet[763]: E1029 09:20:07.770389     763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-026333?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 29 09:20:08 test-preload-026333 kubelet[763]: I1029 09:20:08.022031     763 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-026333"
	Oct 29 09:20:08 test-preload-026333 kubelet[763]: E1029 09:20:08.022492     763 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-026333"
	Oct 29 09:20:10 test-preload-026333 kubelet[763]: E1029 09:20:10.181531     763 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-026333\" not found" node="test-preload-026333"
	Oct 29 09:20:10 test-preload-026333 kubelet[763]: I1029 09:20:10.181640     763 scope.go:117] "RemoveContainer" containerID="03e7a47038c77f4f06bac0b76bfb76ce1baf798bca735da6da0e3b61d0628d55"
	Oct 29 09:20:10 test-preload-026333 kubelet[763]: E1029 09:20:10.181797     763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-test-preload-026333_kube-system(c3eb177bcc960400a69b271943b0973e)\"" pod="kube-system/kube-controller-manager-test-preload-026333" podUID="c3eb177bcc960400a69b271943b0973e"
	Oct 29 09:20:11 test-preload-026333 kubelet[763]: E1029 09:20:11.518017     763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-026333.1872eb6efc43cb65  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-026333,UID:test-preload-026333,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node test-preload-026333 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:test-preload-026333,},FirstTimestamp:2025-10-29 09:14:26.166516581 +0000 UTC m=+0.190813157,LastTimestamp:2025-10-29 09:14:26.166516581 +0000 UTC m=+0.190813157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstan
ce:test-preload-026333,}"
	Oct 29 09:20:14 test-preload-026333 kubelet[763]: E1029 09:20:14.772115     763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-026333?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 29 09:20:15 test-preload-026333 kubelet[763]: I1029 09:20:15.024731     763 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-026333"
	Oct 29 09:20:15 test-preload-026333 kubelet[763]: E1029 09:20:15.025234     763 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-026333"
	Oct 29 09:20:16 test-preload-026333 kubelet[763]: E1029 09:20:16.205425     763 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-026333\" not found"
	Oct 29 09:20:21 test-preload-026333 kubelet[763]: E1029 09:20:21.181093     763 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-026333\" not found" node="test-preload-026333"
	Oct 29 09:20:21 test-preload-026333 kubelet[763]: I1029 09:20:21.181205     763 scope.go:117] "RemoveContainer" containerID="03e7a47038c77f4f06bac0b76bfb76ce1baf798bca735da6da0e3b61d0628d55"
	Oct 29 09:20:21 test-preload-026333 kubelet[763]: E1029 09:20:21.181378     763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-test-preload-026333_kube-system(c3eb177bcc960400a69b271943b0973e)\"" pod="kube-system/kube-controller-manager-test-preload-026333" podUID="c3eb177bcc960400a69b271943b0973e"
	Oct 29 09:20:21 test-preload-026333 kubelet[763]: E1029 09:20:21.518789     763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-026333.1872eb6efc43cb65  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-026333,UID:test-preload-026333,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node test-preload-026333 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:test-preload-026333,},FirstTimestamp:2025-10-29 09:14:26.166516581 +0000 UTC m=+0.190813157,LastTimestamp:2025-10-29 09:14:26.166516581 +0000 UTC m=+0.190813157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstan
ce:test-preload-026333,}"
	Oct 29 09:20:21 test-preload-026333 kubelet[763]: E1029 09:20:21.773002     763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-026333?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 29 09:20:22 test-preload-026333 kubelet[763]: I1029 09:20:22.026714     763 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-026333"
	Oct 29 09:20:22 test-preload-026333 kubelet[763]: E1029 09:20:22.027170     763 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-026333"
	Oct 29 09:20:26 test-preload-026333 kubelet[763]: W1029 09:20:26.132492     763 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 29 09:20:26 test-preload-026333 kubelet[763]: E1029 09:20:26.132575     763 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 29 09:20:26 test-preload-026333 kubelet[763]: E1029 09:20:26.205950     763 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-026333\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p test-preload-026333 -n test-preload-026333
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p test-preload-026333 -n test-preload-026333: exit status 2 (306.304704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "test-preload-026333" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-026333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-026333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-026333: (2.46632139s)
--- FAIL: TestPreload (447.61s)
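The kube-controller-manager and kubelet excerpts above fail on the same dependency: GET https://192.168.76.2:8443/healthz is refused, so the controller context times out and the node can neither register nor renew its lease. A minimal sketch for probing that endpoint by hand is below; the address comes from the log, while skipping TLS verification is an assumption made only to keep the example self-contained (the real components authenticate with the cluster CA).

	// probe_healthz.go - illustrative only, not minikube's or Kubernetes' own code.
	// Reproduces the health check the controller-manager log above times out on.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: skip cert checks for a manual probe
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// With the apiserver down this prints the same "connection refused" seen in the log.
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz status:", resp.Status)
	}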

                                                
                                    
TestPause/serial/Pause (6.54s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-598473 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-598473 --alsologtostderr -v=5: exit status 80 (1.772753949s)

                                                
                                                
-- stdout --
	* Pausing node pause-598473 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:28:47.369062  166758 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:28:47.369828  166758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:28:47.369846  166758 out.go:374] Setting ErrFile to fd 2...
	I1029 09:28:47.369852  166758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:28:47.370107  166758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:28:47.370452  166758 out.go:368] Setting JSON to false
	I1029 09:28:47.370482  166758 mustload.go:66] Loading cluster: pause-598473
	I1029 09:28:47.370897  166758 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:47.371336  166758 cli_runner.go:164] Run: docker container inspect pause-598473 --format={{.State.Status}}
	I1029 09:28:47.389985  166758 host.go:66] Checking if "pause-598473" exists ...
	I1029 09:28:47.390337  166758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:28:47.449233  166758 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:28:47.439214318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:28:47.449853  166758 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-598473 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:28:47.453445  166758 out.go:179] * Pausing node pause-598473 ... 
	I1029 09:28:47.457075  166758 host.go:66] Checking if "pause-598473" exists ...
	I1029 09:28:47.457554  166758 ssh_runner.go:195] Run: systemctl --version
	I1029 09:28:47.457603  166758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:47.477963  166758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:47.583079  166758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:28:47.596739  166758 pause.go:52] kubelet running: true
	I1029 09:28:47.596820  166758 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:28:47.835619  166758 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:28:47.835721  166758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:28:47.908949  166758 cri.go:89] found id: "7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c"
	I1029 09:28:47.909019  166758 cri.go:89] found id: "a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b"
	I1029 09:28:47.909041  166758 cri.go:89] found id: "99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455"
	I1029 09:28:47.909052  166758 cri.go:89] found id: "e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f"
	I1029 09:28:47.909057  166758 cri.go:89] found id: "b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56"
	I1029 09:28:47.909061  166758 cri.go:89] found id: "3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4"
	I1029 09:28:47.909064  166758 cri.go:89] found id: "7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096"
	I1029 09:28:47.909068  166758 cri.go:89] found id: "d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8"
	I1029 09:28:47.909072  166758 cri.go:89] found id: "c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c"
	I1029 09:28:47.909080  166758 cri.go:89] found id: "8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c"
	I1029 09:28:47.909086  166758 cri.go:89] found id: "ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92"
	I1029 09:28:47.909090  166758 cri.go:89] found id: "8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	I1029 09:28:47.909096  166758 cri.go:89] found id: "331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd"
	I1029 09:28:47.909100  166758 cri.go:89] found id: "2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8"
	I1029 09:28:47.909103  166758 cri.go:89] found id: ""
	I1029 09:28:47.909153  166758 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:28:47.920585  166758 retry.go:31] will retry after 210.893104ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:28:47Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:28:48.131988  166758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:28:48.144729  166758 pause.go:52] kubelet running: false
	I1029 09:28:48.144819  166758 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:28:48.293209  166758 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:28:48.293291  166758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:28:48.359637  166758 cri.go:89] found id: "7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c"
	I1029 09:28:48.359662  166758 cri.go:89] found id: "a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b"
	I1029 09:28:48.359678  166758 cri.go:89] found id: "99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455"
	I1029 09:28:48.359683  166758 cri.go:89] found id: "e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f"
	I1029 09:28:48.359686  166758 cri.go:89] found id: "b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56"
	I1029 09:28:48.359690  166758 cri.go:89] found id: "3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4"
	I1029 09:28:48.359693  166758 cri.go:89] found id: "7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096"
	I1029 09:28:48.359696  166758 cri.go:89] found id: "d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8"
	I1029 09:28:48.359699  166758 cri.go:89] found id: "c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c"
	I1029 09:28:48.359706  166758 cri.go:89] found id: "8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c"
	I1029 09:28:48.359713  166758 cri.go:89] found id: "ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92"
	I1029 09:28:48.359716  166758 cri.go:89] found id: "8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	I1029 09:28:48.359720  166758 cri.go:89] found id: "331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd"
	I1029 09:28:48.359726  166758 cri.go:89] found id: "2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8"
	I1029 09:28:48.359729  166758 cri.go:89] found id: ""
	I1029 09:28:48.359778  166758 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:28:48.371179  166758 retry.go:31] will retry after 362.994901ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:28:48Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:28:48.734588  166758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:28:48.749556  166758 pause.go:52] kubelet running: false
	I1029 09:28:48.749641  166758 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:28:48.943232  166758 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:28:48.943333  166758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:28:49.040892  166758 cri.go:89] found id: "7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c"
	I1029 09:28:49.040918  166758 cri.go:89] found id: "a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b"
	I1029 09:28:49.040924  166758 cri.go:89] found id: "99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455"
	I1029 09:28:49.040928  166758 cri.go:89] found id: "e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f"
	I1029 09:28:49.040931  166758 cri.go:89] found id: "b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56"
	I1029 09:28:49.040934  166758 cri.go:89] found id: "3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4"
	I1029 09:28:49.040938  166758 cri.go:89] found id: "7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096"
	I1029 09:28:49.040962  166758 cri.go:89] found id: "d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8"
	I1029 09:28:49.040972  166758 cri.go:89] found id: "c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c"
	I1029 09:28:49.040988  166758 cri.go:89] found id: "8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c"
	I1029 09:28:49.040996  166758 cri.go:89] found id: "ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92"
	I1029 09:28:49.040999  166758 cri.go:89] found id: "8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	I1029 09:28:49.041002  166758 cri.go:89] found id: "331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd"
	I1029 09:28:49.041011  166758 cri.go:89] found id: "2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8"
	I1029 09:28:49.041017  166758 cri.go:89] found id: ""
	I1029 09:28:49.041083  166758 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:28:49.058482  166758 out.go:203] 
	W1029 09:28:49.061332  166758 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:28:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:28:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:28:49.061357  166758 out.go:285] * 
	* 
	W1029 09:28:49.066654  166758 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:28:49.072576  166758 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-598473 --alsologtostderr -v=5" : exit status 80
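The stderr above traces the pause path: kubelet is stopped, the kube-system containers are enumerated through crictl, but `sudo runc list -f json` exits with status 1 because /run/runc does not exist, and after the retries logged by retry.go minikube aborts with GUEST_PAUSE (exit status 80). A rough, illustrative sketch of that list-and-retry step follows; it is not minikube's own code, and the two retry delays are simply the ones printed in the log. It assumes it is run on the minikube node with sudo available.

	// runc_list_retry.go - illustrative sketch of the failing step, not minikube's implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delays := []time.Duration{0, 210 * time.Millisecond, 363 * time.Millisecond} // delays taken from the log
		for i, d := range delays {
			time.Sleep(d)
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("attempt %d succeeded:\n%s\n", i+1, out)
				return
			}
			// On this node the command fails with "open /run/runc: no such file or directory",
			// which is what ultimately surfaces as GUEST_PAUSE / exit status 80.
			fmt.Printf("attempt %d failed: %v\n%s\n", i+1, err, out)
		}
	}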
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-598473
helpers_test.go:243: (dbg) docker inspect pause-598473:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0",
	        "Created": "2025-10-29T09:26:58.905575052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160735,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:26:58.978580165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/hosts",
	        "LogPath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0-json.log",
	        "Name": "/pause-598473",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-598473:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-598473",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0",
	                "LowerDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-598473",
	                "Source": "/var/lib/docker/volumes/pause-598473/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-598473",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-598473",
	                "name.minikube.sigs.k8s.io": "pause-598473",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30649ff1508c3d864430c8b7e4ba3545026451f470a8c958ad950e7003299a49",
	            "SandboxKey": "/var/run/docker/netns/30649ff1508c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-598473": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:02:6a:f9:34:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d4a9121a7d7bab15b4a2c83c57c976ec0f3673a69773eaeac6fff0d9a3417cc",
	                    "EndpointID": "0215fbac5c98bb0f8dd4a60c0e198ffe3c0cca391897078039c39584a5c127fc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-598473",
	                        "32e47a56cf8a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
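The inspect output confirms how the pause attempt reached the node: the container publishes 22/tcp on 127.0.0.1:33018, the same port the log's ssh client opened. A small sketch that extracts that host port with the identical inspect template the run executed is shown below; it assumes a local docker CLI and the pause-598473 container from this report.

	// ssh_port.go - illustrative sketch; prints the published host port for 22/tcp (33018 on this host).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-598473").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}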
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-598473 -n pause-598473
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-598473 -n pause-598473: exit status 2 (427.681999ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-598473 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-598473 logs -n 25: (1.375632611s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-988770 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:22 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p missing-upgrade-648122 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-648122    │ jenkins │ v1.32.0 │ 29 Oct 25 09:22 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ delete  │ -p NoKubernetes-988770                                                                                                                   │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ ssh     │ -p NoKubernetes-988770 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │                     │
	│ stop    │ -p NoKubernetes-988770                                                                                                                   │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p NoKubernetes-988770 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p missing-upgrade-648122 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-648122    │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:24 UTC │
	│ ssh     │ -p NoKubernetes-988770 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │                     │
	│ delete  │ -p NoKubernetes-988770                                                                                                                   │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-392485 │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:24 UTC │
	│ stop    │ -p kubernetes-upgrade-392485                                                                                                             │ kubernetes-upgrade-392485 │ jenkins │ v1.37.0 │ 29 Oct 25 09:24 UTC │ 29 Oct 25 09:24 UTC │
	│ start   │ -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-392485 │ jenkins │ v1.37.0 │ 29 Oct 25 09:24 UTC │                     │
	│ delete  │ -p missing-upgrade-648122                                                                                                                │ missing-upgrade-648122    │ jenkins │ v1.37.0 │ 29 Oct 25 09:24 UTC │ 29 Oct 25 09:24 UTC │
	│ start   │ -p stopped-upgrade-802711 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-802711    │ jenkins │ v1.32.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ stop    │ stopped-upgrade-802711 stop                                                                                                              │ stopped-upgrade-802711    │ jenkins │ v1.32.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ start   │ -p stopped-upgrade-802711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-802711    │ jenkins │ v1.37.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ delete  │ -p stopped-upgrade-802711                                                                                                                │ stopped-upgrade-802711    │ jenkins │ v1.37.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ start   │ -p running-upgrade-214661 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-214661    │ jenkins │ v1.32.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:26 UTC │
	│ start   │ -p running-upgrade-214661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-214661    │ jenkins │ v1.37.0 │ 29 Oct 25 09:26 UTC │ 29 Oct 25 09:26 UTC │
	│ delete  │ -p running-upgrade-214661                                                                                                                │ running-upgrade-214661    │ jenkins │ v1.37.0 │ 29 Oct 25 09:26 UTC │ 29 Oct 25 09:26 UTC │
	│ start   │ -p pause-598473 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-598473              │ jenkins │ v1.37.0 │ 29 Oct 25 09:26 UTC │ 29 Oct 25 09:28 UTC │
	│ start   │ -p pause-598473 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-598473              │ jenkins │ v1.37.0 │ 29 Oct 25 09:28 UTC │ 29 Oct 25 09:28 UTC │
	│ pause   │ -p pause-598473 --alsologtostderr -v=5                                                                                                   │ pause-598473              │ jenkins │ v1.37.0 │ 29 Oct 25 09:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:28:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:28:18.469406  164763 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:28:18.469594  164763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:28:18.469626  164763 out.go:374] Setting ErrFile to fd 2...
	I1029 09:28:18.469649  164763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:28:18.469956  164763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:28:18.470471  164763 out.go:368] Setting JSON to false
	I1029 09:28:18.471467  164763 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4250,"bootTime":1761725848,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:28:18.471566  164763 start.go:143] virtualization:  
	I1029 09:28:18.475253  164763 out.go:179] * [pause-598473] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:28:18.478204  164763 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:28:18.478330  164763 notify.go:221] Checking for updates...
	I1029 09:28:18.483960  164763 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:28:18.486935  164763 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:28:18.489841  164763 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:28:18.492846  164763 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:28:18.495722  164763 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:28:18.499901  164763 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:18.500620  164763 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:28:18.534465  164763 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:28:18.534576  164763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:28:18.604040  164763 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:28:18.594181291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:28:18.604156  164763 docker.go:319] overlay module found
	I1029 09:28:18.607457  164763 out.go:179] * Using the docker driver based on existing profile
	I1029 09:28:18.610737  164763 start.go:309] selected driver: docker
	I1029 09:28:18.610772  164763 start.go:930] validating driver "docker" against &{Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:28:18.610930  164763 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:28:18.611056  164763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:28:18.681905  164763 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:28:18.67218438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:28:18.682322  164763 cni.go:84] Creating CNI manager for ""
	I1029 09:28:18.682398  164763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:28:18.682456  164763 start.go:353] cluster config:
	{Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:28:18.685779  164763 out.go:179] * Starting "pause-598473" primary control-plane node in "pause-598473" cluster
	I1029 09:28:18.688669  164763 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:28:18.691673  164763 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:28:18.694478  164763 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:28:18.694538  164763 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:28:18.694549  164763 cache.go:59] Caching tarball of preloaded images
	I1029 09:28:18.694631  164763 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:28:18.694642  164763 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:28:18.694652  164763 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:28:18.694790  164763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/config.json ...
	I1029 09:28:18.716584  164763 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:28:18.716607  164763 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:28:18.716626  164763 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:28:18.716649  164763 start.go:360] acquireMachinesLock for pause-598473: {Name:mk72356e6ecc3129f08abe6e7883c069226381fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:28:18.716720  164763 start.go:364] duration metric: took 44.693µs to acquireMachinesLock for "pause-598473"
	I1029 09:28:18.716741  164763 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:28:18.716746  164763 fix.go:54] fixHost starting: 
	I1029 09:28:18.716998  164763 cli_runner.go:164] Run: docker container inspect pause-598473 --format={{.State.Status}}
	I1029 09:28:18.733561  164763 fix.go:112] recreateIfNeeded on pause-598473: state=Running err=<nil>
	W1029 09:28:18.733591  164763 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:28:18.736758  164763 out.go:252] * Updating the running docker "pause-598473" container ...
	I1029 09:28:18.736793  164763 machine.go:94] provisionDockerMachine start ...
	I1029 09:28:18.736871  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:18.762143  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:18.762471  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:18.762487  164763 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:28:18.912085  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-598473
	
	I1029 09:28:18.912111  164763 ubuntu.go:182] provisioning hostname "pause-598473"
	I1029 09:28:18.912215  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:18.931105  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:18.931418  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:18.931434  164763 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-598473 && echo "pause-598473" | sudo tee /etc/hostname
	I1029 09:28:19.093958  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-598473
	
	I1029 09:28:19.094039  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:19.111937  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:19.112234  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:19.112258  164763 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-598473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-598473/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-598473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:28:19.265753  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:28:19.265780  164763 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:28:19.265836  164763 ubuntu.go:190] setting up certificates
	I1029 09:28:19.265854  164763 provision.go:84] configureAuth start
	I1029 09:28:19.265935  164763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598473
	I1029 09:28:19.287106  164763 provision.go:143] copyHostCerts
	I1029 09:28:19.287177  164763 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:28:19.287196  164763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:28:19.287271  164763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:28:19.287384  164763 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:28:19.287395  164763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:28:19.287424  164763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:28:19.287532  164763 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:28:19.287544  164763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:28:19.287572  164763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:28:19.287635  164763 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.pause-598473 san=[127.0.0.1 192.168.85.2 localhost minikube pause-598473]
	I1029 09:28:19.810962  164763 provision.go:177] copyRemoteCerts
	I1029 09:28:19.811028  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:28:19.811067  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:19.833484  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:19.940485  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:28:19.959265  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1029 09:28:19.977772  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:28:19.995613  164763 provision.go:87] duration metric: took 729.724891ms to configureAuth
	I1029 09:28:19.995682  164763 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:28:19.995915  164763 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:19.996038  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:20.019428  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:20.019748  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:20.019772  164763 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:28:20.969584  148690 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.077229735s)
	W1029 09:28:20.969618  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1029 09:28:20.969626  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:20.969638  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:21.016043  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:21.016074  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:21.080676  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:21.080711  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:21.144444  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:21.144480  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:21.177867  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:21.177898  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:21.192747  148690 logs.go:123] Gathering logs for kube-apiserver [bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e] ...
	I1029 09:28:21.192778  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e"
	I1029 09:28:21.228562  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:21.228592  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
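Each "Gathering logs for ..." step above shells out to crictl with a 400-line tail against a specific container ID. The same collection can be reproduced by hand with the commands already shown in the log, for example for the kube-apiserver containers (illustrative loop using the same flags as above):

    # Dump the last 400 log lines of every kube-apiserver container (illustrative)
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      sudo crictl logs --tail 400 "$id"
    done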
	I1029 09:28:23.758364  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:24.985032  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40784->192.168.76.2:8443: read: connection reset by peer
	I1029 09:28:24.985085  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:24.985147  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:25.020076  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:25.020097  148690 cri.go:89] found id: "bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e"
	I1029 09:28:25.020101  148690 cri.go:89] found id: ""
	I1029 09:28:25.020109  148690 logs.go:282] 2 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4 bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e]
	I1029 09:28:25.020168  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.024264  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.028073  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:25.028149  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:25.053676  148690 cri.go:89] found id: ""
	I1029 09:28:25.053699  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.053707  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:25.053713  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:25.053769  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:25.081403  148690 cri.go:89] found id: ""
	I1029 09:28:25.081427  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.081435  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:25.081442  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:25.081496  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:25.108974  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:25.108997  148690 cri.go:89] found id: ""
	I1029 09:28:25.109005  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:25.109059  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.112772  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:25.112844  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:25.142997  148690 cri.go:89] found id: ""
	I1029 09:28:25.143022  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.143031  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:25.143039  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:25.143096  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:25.169963  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:25.169984  148690 cri.go:89] found id: "317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:25.169989  148690 cri.go:89] found id: ""
	I1029 09:28:25.169996  148690 logs.go:282] 2 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94]
	I1029 09:28:25.170049  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.173948  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.177606  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:25.177675  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:25.217949  148690 cri.go:89] found id: ""
	I1029 09:28:25.217974  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.217983  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:25.217990  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:25.218046  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:25.251847  148690 cri.go:89] found id: ""
	I1029 09:28:25.251872  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.251881  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:25.251894  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:25.251905  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:25.384105  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:25.384178  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1029 09:28:25.414243  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:28:25.414263  164763 machine.go:97] duration metric: took 6.677461773s to provisionDockerMachine
	I1029 09:28:25.414274  164763 start.go:293] postStartSetup for "pause-598473" (driver="docker")
	I1029 09:28:25.414284  164763 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:28:25.414340  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:28:25.414379  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.436163  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.541571  164763 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:28:25.546255  164763 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:28:25.546285  164763 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:28:25.546296  164763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:28:25.546348  164763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:28:25.546437  164763 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:28:25.546547  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:28:25.556505  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:28:25.580087  164763 start.go:296] duration metric: took 165.798591ms for postStartSetup
	I1029 09:28:25.580184  164763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:28:25.580242  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.599370  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.714877  164763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:28:25.726292  164763 fix.go:56] duration metric: took 7.009538286s for fixHost
	I1029 09:28:25.726331  164763 start.go:83] releasing machines lock for "pause-598473", held for 7.009590274s
	I1029 09:28:25.726410  164763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598473
	I1029 09:28:25.749263  164763 ssh_runner.go:195] Run: cat /version.json
	I1029 09:28:25.749330  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.749692  164763 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:28:25.749760  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.778051  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.791540  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.991089  164763 ssh_runner.go:195] Run: systemctl --version
	I1029 09:28:25.997639  164763 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:28:26.044171  164763 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:28:26.049499  164763 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:28:26.049634  164763 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:28:26.057619  164763 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:28:26.057643  164763 start.go:496] detecting cgroup driver to use...
	I1029 09:28:26.057696  164763 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:28:26.057753  164763 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:28:26.073500  164763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:28:26.086740  164763 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:28:26.086801  164763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:28:26.102047  164763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:28:26.115294  164763 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:28:26.253869  164763 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:28:26.392991  164763 docker.go:234] disabling docker service ...
	I1029 09:28:26.393141  164763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:28:26.408202  164763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:28:26.421559  164763 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:28:26.557979  164763 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:28:26.701796  164763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:28:26.715144  164763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:28:26.729791  164763 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:28:26.729854  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.738691  164763 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:28:26.738763  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.747936  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.757557  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.766266  164763 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:28:26.774488  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.783641  164763 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.791734  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.800528  164763 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:28:26.808090  164763 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:28:26.815459  164763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:28:26.955546  164763 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:28:27.302102  164763 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:28:27.302216  164763 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:28:27.306116  164763 start.go:564] Will wait 60s for crictl version
	I1029 09:28:27.306238  164763 ssh_runner.go:195] Run: which crictl
	I1029 09:28:27.309754  164763 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:28:27.332672  164763 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:28:27.332840  164763 ssh_runner.go:195] Run: crio --version
	I1029 09:28:27.376425  164763 ssh_runner.go:195] Run: crio --version
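The sed edits a few steps back rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) before the daemon-reload and CRI-O restart. An illustrative way to confirm the drop-in ended up as intended, using the same `crio config` command the log runs shortly after this point:

    # Spot-check the rewritten drop-in and the effective runtime config (illustrative)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'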
	I1029 09:28:27.430706  164763 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:28:27.434998  164763 cli_runner.go:164] Run: docker network inspect pause-598473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:28:27.460673  164763 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:28:27.465949  164763 kubeadm.go:884] updating cluster {Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:28:27.466108  164763 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:28:27.466159  164763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:28:27.541464  164763 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:28:27.541484  164763 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:28:27.541545  164763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:28:27.621617  164763 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:28:27.621686  164763 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:28:27.621716  164763 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:28:27.621861  164763 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-598473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:28:27.621981  164763 ssh_runner.go:195] Run: crio config
	I1029 09:28:27.774223  164763 cni.go:84] Creating CNI manager for ""
	I1029 09:28:27.774295  164763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:28:27.774334  164763 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:28:27.774391  164763 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-598473 NodeName:pause-598473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:28:27.774582  164763 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-598473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:28:27.774693  164763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:28:27.788763  164763 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:28:27.788914  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:28:27.801613  164763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1029 09:28:27.826703  164763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:28:27.848062  164763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
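The kubeadm.yaml.new written above is the multi-document InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration file generated earlier in this entry; later in the log it is diffed against the existing /var/tmp/minikube/kubeadm.yaml before being applied. A sketch of validating such a file by hand with the pinned kubeadm binary from the log, assuming the `config validate` subcommand is available in this kubeadm version:

    # Static validation of the generated multi-document config (illustrative)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new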
	I1029 09:28:27.869428  164763 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:28:27.873508  164763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:28:28.169026  164763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:28:28.187148  164763 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473 for IP: 192.168.85.2
	I1029 09:28:28.187216  164763 certs.go:195] generating shared ca certs ...
	I1029 09:28:28.187248  164763 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:28:28.187442  164763 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:28:28.187536  164763 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:28:28.187564  164763 certs.go:257] generating profile certs ...
	I1029 09:28:28.187707  164763 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.key
	I1029 09:28:28.187841  164763 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/apiserver.key.62d36ef7
	I1029 09:28:28.188186  164763 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/proxy-client.key
	I1029 09:28:28.196091  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:28:28.196195  164763 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:28:28.196238  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:28:28.196292  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:28:28.196372  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:28:28.196436  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:28:28.196525  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:28:28.197206  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:28:28.247854  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:28:28.289299  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:28:28.321465  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:28:28.350248  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 09:28:28.400483  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:28:28.488096  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:28:28.535751  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:28:28.581395  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:28:28.622202  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:28:28.659111  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:28:28.712129  164763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:28:28.750632  164763 ssh_runner.go:195] Run: openssl version
	I1029 09:28:28.775443  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:28:28.794725  164763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:28:28.799652  164763 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:28:28.799791  164763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:28:28.883091  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:28:28.912160  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:28:28.923994  164763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:28:28.931938  164763 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:28:28.931999  164763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:28:29.034360  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:28:29.061910  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:28:29.086916  164763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:28:29.093249  164763 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:28:29.093313  164763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:28:29.207119  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:28:29.220277  164763 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:28:29.228408  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:28:29.407590  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:28:29.528752  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:28:29.577884  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:28:29.621166  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:28:29.662729  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
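The run of `openssl x509 -noout ... -checkend 86400` commands above checks that none of the named control-plane certificates expires within the next 86400 seconds (24 hours); openssl exits non-zero if a certificate would expire inside that window. A minimal illustration of the same check against one of the certs listed above:

    # Exit 0 = still valid 24h from now; exit 1 = expiring sooner (illustrative)
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate is valid for at least another 24h"
    else
      echo "certificate expires within 24h"
    fi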
	I1029 09:28:29.713150  164763 kubeadm.go:401] StartCluster: {Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:28:29.713364  164763 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:28:29.713461  164763 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:28:29.752792  164763 cri.go:89] found id: "7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c"
	I1029 09:28:29.752868  164763 cri.go:89] found id: "a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b"
	I1029 09:28:29.752889  164763 cri.go:89] found id: "99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455"
	I1029 09:28:29.752912  164763 cri.go:89] found id: "e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f"
	I1029 09:28:29.752946  164763 cri.go:89] found id: "b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56"
	I1029 09:28:29.752971  164763 cri.go:89] found id: "3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4"
	I1029 09:28:29.752993  164763 cri.go:89] found id: "7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096"
	I1029 09:28:29.753025  164763 cri.go:89] found id: "d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8"
	I1029 09:28:29.753046  164763 cri.go:89] found id: "c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c"
	I1029 09:28:29.753072  164763 cri.go:89] found id: "8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c"
	I1029 09:28:29.753108  164763 cri.go:89] found id: "ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92"
	I1029 09:28:29.753130  164763 cri.go:89] found id: "8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	I1029 09:28:29.753152  164763 cri.go:89] found id: "331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd"
	I1029 09:28:29.753188  164763 cri.go:89] found id: "2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8"
	I1029 09:28:29.753211  164763 cri.go:89] found id: ""
	I1029 09:28:29.753294  164763 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:28:29.773622  164763 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:28:29Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:28:29.773757  164763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:28:29.788175  164763 kubeadm.go:417] found existing configuration files, will attempt cluster restart
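The restart-vs-reinit decision above keys off the `ls` probe in the previous entry: when kubeadm-flags.env, the kubelet config, and the etcd data directory all exist, the cluster-restart path is taken instead of a fresh kubeadm init. An illustrative re-run of the same probe:

    # Same existence check the restart decision is based on (illustrative)
    sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd \
      && echo "existing cluster state found: attempting restart" \
      || echo "missing state: a full kubeadm init would be required"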
	I1029 09:28:29.788251  164763 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:28:29.788367  164763 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:28:29.801568  164763 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:28:29.802372  164763 kubeconfig.go:125] found "pause-598473" server: "https://192.168.85.2:8443"
	I1029 09:28:29.803411  164763 kapi.go:59] client config for pause-598473: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:28:29.804104  164763 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 09:28:29.804190  164763 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 09:28:29.804214  164763 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 09:28:29.804235  164763 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 09:28:29.804272  164763 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 09:28:29.804715  164763 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:28:29.817506  164763 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:28:29.817589  164763 kubeadm.go:602] duration metric: took 29.293291ms to restartPrimaryControlPlane
	I1029 09:28:29.817617  164763 kubeadm.go:403] duration metric: took 104.476114ms to StartCluster
	I1029 09:28:29.817647  164763 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:28:29.817753  164763 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:28:29.818664  164763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:28:29.818947  164763 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:28:29.819367  164763 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:28:29.819729  164763 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:29.823340  164763 out.go:179] * Enabled addons: 
	I1029 09:28:29.823442  164763 out.go:179] * Verifying Kubernetes components...
	W1029 09:28:25.476859  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:25.476876  148690 logs.go:123] Gathering logs for kube-apiserver [bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e] ...
	I1029 09:28:25.476895  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e"
	I1029 09:28:25.512046  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:25.512078  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:25.544696  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:25.544720  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:25.586941  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:25.586964  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:25.604220  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:25.604303  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:25.648623  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:25.649333  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:25.717122  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:25.717173  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:25.770685  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:25.770714  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:28.358170  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:28.358506  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:28.358542  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:28.358595  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:28.433046  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:28.433064  148690 cri.go:89] found id: ""
	I1029 09:28:28.433071  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:28.433121  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.436959  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:28.437029  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:28.485188  148690 cri.go:89] found id: ""
	I1029 09:28:28.485209  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.485217  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:28.485226  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:28.485283  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:28.541016  148690 cri.go:89] found id: ""
	I1029 09:28:28.541038  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.541046  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:28.541053  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:28.541107  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:28.589048  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:28.589067  148690 cri.go:89] found id: ""
	I1029 09:28:28.589075  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:28.589139  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.595246  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:28.595311  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:28.658455  148690 cri.go:89] found id: ""
	I1029 09:28:28.658476  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.658484  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:28.658491  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:28.658546  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:28.705155  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:28.705174  148690 cri.go:89] found id: "317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:28.705179  148690 cri.go:89] found id: ""
	I1029 09:28:28.705186  148690 logs.go:282] 2 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94]
	I1029 09:28:28.705240  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.709151  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.716747  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:28.716820  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:28.782600  148690 cri.go:89] found id: ""
	I1029 09:28:28.782620  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.782629  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:28.782635  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:28.782688  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:28.830553  148690 cri.go:89] found id: ""
	I1029 09:28:28.830575  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.830583  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:28.830597  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:28.830608  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:28.990311  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:28.990394  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:29.012850  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:29.013029  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:29.122132  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:29.122207  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:29.178420  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:29.178446  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:29.293168  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:29.293187  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:29.293199  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:29.355867  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:29.355937  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:29.391111  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:29.391188  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:29.471816  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:29.471897  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
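	(A hedged sketch, not part of the captured output: the repeated "Gathering logs" steps above reduce to the following shell pattern, using the same tools and paths shown in this run; the <container-id> placeholder stands for an ID returned by the first command.)
	
	    sudo crictl ps -a --quiet --name=kube-apiserver            # resolve container IDs for a component
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>  # tail the last 400 lines of that container
	    sudo journalctl -u crio -n 400                              # CRI-O daemon logs
	    sudo journalctl -u kubelet -n 400                           # kubelet logs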
	I1029 09:28:29.826140  164763 addons.go:515] duration metric: took 6.76969ms for enable addons: enabled=[]
	I1029 09:28:29.826248  164763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:28:30.076497  164763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:28:30.099849  164763 node_ready.go:35] waiting up to 6m0s for node "pause-598473" to be "Ready" ...
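	(A hedged equivalent of the node-readiness wait above, assumed rather than taken from this run: the same condition can be polled directly with kubectl against the pause-598473 context.)
	
	    kubectl --context pause-598473 wait node/pause-598473 --for=condition=Ready --timeout=6m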
	I1029 09:28:32.041229  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:32.041584  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:32.041622  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:32.041672  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:32.087119  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:32.087138  148690 cri.go:89] found id: ""
	I1029 09:28:32.087146  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:32.087201  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.093859  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:32.093930  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:32.132217  148690 cri.go:89] found id: ""
	I1029 09:28:32.132237  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.132245  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:32.132252  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:32.132332  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:32.180807  148690 cri.go:89] found id: ""
	I1029 09:28:32.180828  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.180836  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:32.180842  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:32.180897  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:32.217111  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:32.217129  148690 cri.go:89] found id: ""
	I1029 09:28:32.217137  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:32.217188  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.221475  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:32.221594  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:32.277740  148690 cri.go:89] found id: ""
	I1029 09:28:32.277761  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.277769  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:32.277775  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:32.277829  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:32.321267  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:32.321286  148690 cri.go:89] found id: "317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:32.321291  148690 cri.go:89] found id: ""
	I1029 09:28:32.321298  148690 logs.go:282] 2 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94]
	I1029 09:28:32.321352  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.328366  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.332556  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:32.332782  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:32.386253  148690 cri.go:89] found id: ""
	I1029 09:28:32.386336  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.386360  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:32.386401  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:32.386499  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:32.429648  148690 cri.go:89] found id: ""
	I1029 09:28:32.429670  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.429679  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:32.429693  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:32.429704  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:32.502853  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:32.502935  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:32.624900  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:32.624958  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:32.624995  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:32.702365  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:32.702439  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:32.754053  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:32.754079  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:32.814231  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:32.814256  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:32.958252  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:32.958329  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:32.975510  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:32.975651  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:33.056645  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:33.056720  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:33.829807  164763 node_ready.go:49] node "pause-598473" is "Ready"
	I1029 09:28:33.829851  164763 node_ready.go:38] duration metric: took 3.729906134s for node "pause-598473" to be "Ready" ...
	I1029 09:28:33.829865  164763 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:28:33.829927  164763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:28:33.849963  164763 api_server.go:72] duration metric: took 4.030953784s to wait for apiserver process to appear ...
	I1029 09:28:33.849985  164763 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:28:33.850005  164763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:28:33.923972  164763 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:28:33.924043  164763 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
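	(The [+]/[-] lines above are the apiserver's verbose health report; a hedged way to reproduce such a dump by hand, assuming anonymous access to the health endpoints is still enabled in this profile:)
	
	    curl -k "https://192.168.85.2:8443/healthz?verbose"   # -k because the apiserver serves a self-signed cert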
	I1029 09:28:34.350261  164763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:28:34.364845  164763 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:28:34.364913  164763 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:28:34.850218  164763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:28:34.858184  164763 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:28:34.859218  164763 api_server.go:141] control plane version: v1.34.1
	I1029 09:28:34.859242  164763 api_server.go:131] duration metric: took 1.0092501s to wait for apiserver health ...
	I1029 09:28:34.859250  164763 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:28:34.862663  164763 system_pods.go:59] 7 kube-system pods found
	I1029 09:28:34.862701  164763 system_pods.go:61] "coredns-66bc5c9577-tkwf6" [8d843afb-d055-43fc-92e1-8816da3ab88b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:28:34.862711  164763 system_pods.go:61] "etcd-pause-598473" [c0e187a0-e38a-44e0-b57c-abc11d5e4c6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:28:34.862719  164763 system_pods.go:61] "kindnet-g6xj4" [73a37546-9547-4ab6-a47d-2ba7197a11f5] Running
	I1029 09:28:34.862726  164763 system_pods.go:61] "kube-apiserver-pause-598473" [6a93240c-59bc-46b5-9b69-af188f338ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:28:34.862733  164763 system_pods.go:61] "kube-controller-manager-pause-598473" [f993caa8-139d-41a6-800c-7f0e16805c9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:28:34.862737  164763 system_pods.go:61] "kube-proxy-tjggg" [d87db520-c253-4583-9374-28fcc707d1dd] Running
	I1029 09:28:34.862746  164763 system_pods.go:61] "kube-scheduler-pause-598473" [183456c1-44f2-4a58-ba59-285f59ed7268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:28:34.862759  164763 system_pods.go:74] duration metric: took 3.500809ms to wait for pod list to return data ...
	I1029 09:28:34.862768  164763 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:28:34.864653  164763 default_sa.go:45] found service account: "default"
	I1029 09:28:34.864677  164763 default_sa.go:55] duration metric: took 1.898582ms for default service account to be created ...
	I1029 09:28:34.864689  164763 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:28:34.867300  164763 system_pods.go:86] 7 kube-system pods found
	I1029 09:28:34.867333  164763 system_pods.go:89] "coredns-66bc5c9577-tkwf6" [8d843afb-d055-43fc-92e1-8816da3ab88b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:28:34.867342  164763 system_pods.go:89] "etcd-pause-598473" [c0e187a0-e38a-44e0-b57c-abc11d5e4c6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:28:34.867348  164763 system_pods.go:89] "kindnet-g6xj4" [73a37546-9547-4ab6-a47d-2ba7197a11f5] Running
	I1029 09:28:34.867383  164763 system_pods.go:89] "kube-apiserver-pause-598473" [6a93240c-59bc-46b5-9b69-af188f338ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:28:34.867398  164763 system_pods.go:89] "kube-controller-manager-pause-598473" [f993caa8-139d-41a6-800c-7f0e16805c9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:28:34.867416  164763 system_pods.go:89] "kube-proxy-tjggg" [d87db520-c253-4583-9374-28fcc707d1dd] Running
	I1029 09:28:34.867423  164763 system_pods.go:89] "kube-scheduler-pause-598473" [183456c1-44f2-4a58-ba59-285f59ed7268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:28:34.867431  164763 system_pods.go:126] duration metric: took 2.735929ms to wait for k8s-apps to be running ...
	I1029 09:28:34.867459  164763 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:28:34.867531  164763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:28:34.881634  164763 system_svc.go:56] duration metric: took 14.182906ms WaitForService to wait for kubelet
	I1029 09:28:34.881665  164763 kubeadm.go:587] duration metric: took 5.062659688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:28:34.881692  164763 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:28:34.884098  164763 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:28:34.884129  164763 node_conditions.go:123] node cpu capacity is 2
	I1029 09:28:34.884141  164763 node_conditions.go:105] duration metric: took 2.443784ms to run NodePressure ...
	I1029 09:28:34.884153  164763 start.go:242] waiting for startup goroutines ...
	I1029 09:28:34.884161  164763 start.go:247] waiting for cluster config update ...
	I1029 09:28:34.884169  164763 start.go:256] writing updated cluster config ...
	I1029 09:28:34.884572  164763 ssh_runner.go:195] Run: rm -f paused
	I1029 09:28:34.888335  164763 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:28:34.888954  164763 kapi.go:59] client config for pause-598473: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:28:34.891930  164763 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tkwf6" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:28:36.896813  164763 pod_ready.go:104] pod "coredns-66bc5c9577-tkwf6" is not "Ready", error: <nil>
	I1029 09:28:35.627579  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:35.628008  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:35.628061  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:35.628119  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:35.657268  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:35.657289  148690 cri.go:89] found id: ""
	I1029 09:28:35.657297  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:35.657381  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:35.661143  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:35.661220  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:35.685798  148690 cri.go:89] found id: ""
	I1029 09:28:35.685822  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.685831  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:35.685838  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:35.685892  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:35.711480  148690 cri.go:89] found id: ""
	I1029 09:28:35.711513  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.711522  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:35.711531  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:35.711588  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:35.743827  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:35.743854  148690 cri.go:89] found id: ""
	I1029 09:28:35.743870  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:35.743922  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:35.748108  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:35.748179  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:35.776972  148690 cri.go:89] found id: ""
	I1029 09:28:35.776997  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.777006  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:35.777013  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:35.777070  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:35.814721  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:35.814743  148690 cri.go:89] found id: ""
	I1029 09:28:35.814753  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:35.814809  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:35.818665  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:35.818738  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:35.848104  148690 cri.go:89] found id: ""
	I1029 09:28:35.848134  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.848148  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:35.848155  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:35.848238  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:35.876542  148690 cri.go:89] found id: ""
	I1029 09:28:35.876566  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.876575  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:35.876584  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:35.876600  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:35.918708  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:35.918738  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:35.986320  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:35.986355  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:36.033628  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:36.033708  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:36.129568  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:36.129646  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:36.177856  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:36.177928  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:36.331019  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:36.331053  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:36.349162  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:36.349198  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:36.431707  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:38.933165  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:38.933549  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:38.933633  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:38.933713  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:38.963867  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:38.963899  148690 cri.go:89] found id: ""
	I1029 09:28:38.963908  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:38.963964  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:38.968350  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:38.968423  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:38.999073  148690 cri.go:89] found id: ""
	I1029 09:28:38.999098  148690 logs.go:282] 0 containers: []
	W1029 09:28:38.999106  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:38.999113  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:38.999195  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:39.029402  148690 cri.go:89] found id: ""
	I1029 09:28:39.029425  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.029434  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:39.029441  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:39.029498  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:39.055884  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:39.055904  148690 cri.go:89] found id: ""
	I1029 09:28:39.055912  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:39.055975  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:39.059869  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:39.059980  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:39.087338  148690 cri.go:89] found id: ""
	I1029 09:28:39.087401  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.087424  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:39.087450  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:39.087528  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:39.113095  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:39.113116  148690 cri.go:89] found id: ""
	I1029 09:28:39.113124  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:39.113205  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:39.117033  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:39.117205  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:39.150763  148690 cri.go:89] found id: ""
	I1029 09:28:39.150787  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.150795  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:39.150802  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:39.150857  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:39.189925  148690 cri.go:89] found id: ""
	I1029 09:28:39.189955  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.189976  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:39.189986  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:39.190004  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:39.224620  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:39.224649  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:39.310379  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:39.310415  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:39.339506  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:39.339536  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:39.407883  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:39.407922  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:39.441986  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:39.442016  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:39.584232  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:39.584323  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:39.603301  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:39.603383  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:39.688150  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1029 09:28:38.896961  164763 pod_ready.go:104] pod "coredns-66bc5c9577-tkwf6" is not "Ready", error: <nil>
	I1029 09:28:39.897522  164763 pod_ready.go:94] pod "coredns-66bc5c9577-tkwf6" is "Ready"
	I1029 09:28:39.897552  164763 pod_ready.go:86] duration metric: took 5.005553414s for pod "coredns-66bc5c9577-tkwf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:39.900077  164763 pod_ready.go:83] waiting for pod "etcd-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:28:41.906276  164763 pod_ready.go:104] pod "etcd-pause-598473" is not "Ready", error: <nil>
	I1029 09:28:42.189673  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:42.190191  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:42.190250  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:42.190358  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:42.230916  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:42.230991  148690 cri.go:89] found id: ""
	I1029 09:28:42.231006  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:42.231070  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:42.235840  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:42.235921  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:42.266560  148690 cri.go:89] found id: ""
	I1029 09:28:42.266582  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.266590  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:42.266597  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:42.266661  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:42.302224  148690 cri.go:89] found id: ""
	I1029 09:28:42.302253  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.302262  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:42.302269  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:42.302329  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:42.330749  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:42.330773  148690 cri.go:89] found id: ""
	I1029 09:28:42.330781  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:42.330834  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:42.334842  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:42.334924  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:42.362705  148690 cri.go:89] found id: ""
	I1029 09:28:42.362729  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.362737  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:42.362745  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:42.362821  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:42.390868  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:42.390892  148690 cri.go:89] found id: ""
	I1029 09:28:42.390900  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:42.390977  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:42.394914  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:42.395028  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:42.435556  148690 cri.go:89] found id: ""
	I1029 09:28:42.435581  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.435590  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:42.435597  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:42.435659  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:42.473572  148690 cri.go:89] found id: ""
	I1029 09:28:42.473598  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.473608  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:42.473616  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:42.473628  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:42.609657  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:42.609695  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:42.624723  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:42.624760  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:42.693056  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:42.693075  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:42.693089  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:42.726495  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:42.726531  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:42.790242  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:42.790276  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:42.817886  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:42.817915  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:42.882476  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:42.882516  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:45.426739  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:45.427130  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:45.427177  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:45.427232  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:45.454219  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:45.454239  148690 cri.go:89] found id: ""
	I1029 09:28:45.454247  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:45.454303  148690 ssh_runner.go:195] Run: which crictl
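	The log-gathering pass above (process 148690) reduces to a handful of node-side commands, copied from its "Run:" lines; the container ID below is a placeholder for an ID returned by the first command:
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>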
	W1029 09:28:43.906465  164763 pod_ready.go:104] pod "etcd-pause-598473" is not "Ready", error: <nil>
	W1029 09:28:45.907201  164763 pod_ready.go:104] pod "etcd-pause-598473" is not "Ready", error: <nil>
	I1029 09:28:46.405921  164763 pod_ready.go:94] pod "etcd-pause-598473" is "Ready"
	I1029 09:28:46.405951  164763 pod_ready.go:86] duration metric: took 6.505851666s for pod "etcd-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.408360  164763 pod_ready.go:83] waiting for pod "kube-apiserver-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.412664  164763 pod_ready.go:94] pod "kube-apiserver-pause-598473" is "Ready"
	I1029 09:28:46.412688  164763 pod_ready.go:86] duration metric: took 4.300265ms for pod "kube-apiserver-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.414671  164763 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.418935  164763 pod_ready.go:94] pod "kube-controller-manager-pause-598473" is "Ready"
	I1029 09:28:46.418964  164763 pod_ready.go:86] duration metric: took 4.265262ms for pod "kube-controller-manager-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.421055  164763 pod_ready.go:83] waiting for pod "kube-proxy-tjggg" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.605237  164763 pod_ready.go:94] pod "kube-proxy-tjggg" is "Ready"
	I1029 09:28:46.605274  164763 pod_ready.go:86] duration metric: took 184.181374ms for pod "kube-proxy-tjggg" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.804480  164763 pod_ready.go:83] waiting for pod "kube-scheduler-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:47.205366  164763 pod_ready.go:94] pod "kube-scheduler-pause-598473" is "Ready"
	I1029 09:28:47.205392  164763 pod_ready.go:86] duration metric: took 400.883438ms for pod "kube-scheduler-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:47.205403  164763 pod_ready.go:40] duration metric: took 12.317034406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:28:47.264684  164763 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:28:47.267824  164763 out.go:179] * Done! kubectl is now configured to use "pause-598473" cluster and "default" namespace by default
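	The pod_ready waits above poll each control-plane pod in kube-system until it reports Ready. A rough manual equivalent, assuming the context name from the trace and the component labels minikube waits on, would be:
	    kubectl --context pause-598473 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=2m
	repeated for component=kube-apiserver, component=kube-controller-manager, component=kube-scheduler, and k8s-app=kube-proxy.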
	
	
	==> CRI-O <==
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.677300504Z" level=info msg="Starting container: a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b" id=f215c47b-5346-4366-b1e8-4735ed9e043d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.687015134Z" level=info msg="Started container" PID=2177 containerID=e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f description=kube-system/kube-scheduler-pause-598473/kube-scheduler id=4ac0e04a-31f1-4187-8760-7f079d31b187 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a040c4a177bf9ee8d244be1913d81a25f896f98f02febb02b7fab83d49493eb
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.700243234Z" level=info msg="Started container" PID=2196 containerID=99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455 description=kube-system/coredns-66bc5c9577-tkwf6/coredns id=9ce0a19c-2653-4567-a448-e7e96dbc8739 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b2989e43fe4c1bdc1b7ea5944b350b5906a567b663e7eb596b9de3e752480b3
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.711090402Z" level=info msg="Started container" PID=2206 containerID=a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b description=kube-system/kindnet-g6xj4/kindnet-cni id=f215c47b-5346-4366-b1e8-4735ed9e043d name=/runtime.v1.RuntimeService/StartContainer sandboxID=02b7005eb7e20b0048cf4c4ddf689b3a8893f688795655c73399cf993bbe198f
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.735784727Z" level=info msg="Created container 7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c: kube-system/kube-apiserver-pause-598473/kube-apiserver" id=97c0c8a5-be04-406a-9814-d03a5f7685d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.736619769Z" level=info msg="Starting container: 7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c" id=d561dbf1-dd68-4e7d-8e26-f034951d0ad5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.738627028Z" level=info msg="Started container" PID=2224 containerID=7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c description=kube-system/kube-apiserver-pause-598473/kube-apiserver id=d561dbf1-dd68-4e7d-8e26-f034951d0ad5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b48c4155ce95656fdde97a5e70275c16a46ca3ab3119d3630dc01182571ef2b1
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.061049134Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.064890877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.064928235Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.064960112Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.069990114Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.070173812Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.0702706Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.073864563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.073901461Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.073928506Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.077528089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.077570493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.07759492Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.082050789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.08209307Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.08211945Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.085672731Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.085717121Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	7dda4d8e9247e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   b48c4155ce956       kube-apiserver-pause-598473            kube-system
	a40e8b1aec6cf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   02b7005eb7e20       kindnet-g6xj4                          kube-system
	99ffc1b8e15db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   3b2989e43fe4c       coredns-66bc5c9577-tkwf6               kube-system
	e240f2f193b4a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   0a040c4a177bf       kube-scheduler-pause-598473            kube-system
	b54a7a1da46f4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   6b51b34e04bb0       kube-proxy-tjggg                       kube-system
	3f24d3b0d1715       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   4250985715d0c       kube-controller-manager-pause-598473   kube-system
	7257e194f3686       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   bfcf114256b60       etcd-pause-598473                      kube-system
	d384a6fc7e5d0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   3b2989e43fe4c       coredns-66bc5c9577-tkwf6               kube-system
	c190688eeeb79       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   6b51b34e04bb0       kube-proxy-tjggg                       kube-system
	8747eed7a2764       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   02b7005eb7e20       kindnet-g6xj4                          kube-system
	ca16a1729c769       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   0a040c4a177bf       kube-scheduler-pause-598473            kube-system
	8027a2710b597       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b48c4155ce956       kube-apiserver-pause-598473            kube-system
	331659622ea96       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4250985715d0c       kube-controller-manager-pause-598473   kube-system
	2413d471a2a42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bfcf114256b60       etcd-pause-598473                      kube-system
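	This table is the node's CRI container listing; it corresponds to the crictl invocation seen earlier in the trace (with a docker fallback when crictl is absent), roughly:
	    sudo crictl ps -a || sudo docker ps -a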
	
	
	==> coredns [99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46759 - 46793 "HINFO IN 8079236798295514612.676057281158168028. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013039044s
	
	
	==> coredns [d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59424 - 18203 "HINFO IN 6784855079271061273.5378442763212798112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024337192s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-598473
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-598473
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=pause-598473
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_27_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:27:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-598473
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:28:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:27:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:27:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:27:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:28:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-598473
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d91088ba-c87b-4c03-af8f-a05de72276c1
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tkwf6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-598473                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-g6xj4                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-598473             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-598473    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-tjggg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-598473             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 74s   kube-proxy       
	  Normal   Starting                 16s   kube-proxy       
	  Normal   Starting                 81s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s   kubelet          Node pause-598473 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s   kubelet          Node pause-598473 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s   kubelet          Node pause-598473 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s   node-controller  Node pause-598473 event: Registered Node pause-598473 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-598473 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-598473 event: Registered Node pause-598473 in Controller
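	The node description above is the output of kubectl describe against this node; minikube collects it through the bundled binary using the same form of command seen in the earlier "Run:" line:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig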
	
	
	==> dmesg <==
	[Oct29 08:48] overlayfs: idmapped layers are currently not supported
	[Oct29 08:56] overlayfs: idmapped layers are currently not supported
	[  +3.225081] overlayfs: idmapped layers are currently not supported
	[Oct29 08:57] overlayfs: idmapped layers are currently not supported
	[Oct29 08:58] overlayfs: idmapped layers are currently not supported
	[Oct29 08:59] overlayfs: idmapped layers are currently not supported
	[Oct29 09:04] overlayfs: idmapped layers are currently not supported
	[Oct29 09:05] overlayfs: idmapped layers are currently not supported
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8] <==
	{"level":"warn","ts":"2025-10-29T09:27:25.398272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.429190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.478134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.494991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.524873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.557526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.688653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59266","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:28:20.191702Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-29T09:28:20.191758Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-598473","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-29T09:28:20.191855Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T09:28:20.341585Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T09:28:20.341691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:28:20.341727Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-29T09:28:20.341818Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-29T09:28:20.341838Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-29T09:28:20.341912Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T09:28:20.341981Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T09:28:20.342015Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-29T09:28:20.342083Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T09:28:20.342095Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T09:28:20.342103Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:28:20.345023Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-29T09:28:20.345101Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:28:20.345173Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-29T09:28:20.345200Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-598473","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096] <==
	{"level":"warn","ts":"2025-10-29T09:28:31.656929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.681995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.703173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.715478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.734455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.749565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.799134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.800448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.831591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.861193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.877086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.889629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.906732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.928697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.953993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.973812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.000140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.015623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.050288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.113790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.133015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.182549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.215015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.230912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.418506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:28:50 up  1:11,  0 user,  load average: 2.22, 2.64, 2.03
	Linux pause-598473 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c] <==
	I1029 09:27:35.453934       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:27:35.454181       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:27:35.454363       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:27:35.454385       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:27:35.454399       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:27:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:27:35.655381       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:27:35.655410       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:27:35.655419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:27:35.655517       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:28:05.655030       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:28:05.655229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:28:05.655347       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:28:05.745907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1029 09:28:06.955865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:28:06.955897       1 metrics.go:72] Registering metrics
	I1029 09:28:06.955968       1 controller.go:711] "Syncing nftables rules"
	I1029 09:28:15.661431       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:28:15.661487       1 main.go:301] handling current node
	
	
	==> kindnet [a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b] <==
	I1029 09:28:27.814198       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:28:27.846073       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:28:27.846202       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:28:27.846214       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:28:27.846228       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:28:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:28:28.057538       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:28:28.057602       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:28:28.057662       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:28:28.058407       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:28:33.958718       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:28:33.958813       1 metrics.go:72] Registering metrics
	I1029 09:28:33.958903       1 controller.go:711] "Syncing nftables rules"
	I1029 09:28:38.060516       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:28:38.060699       1 main.go:301] handling current node
	I1029 09:28:48.057922       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:28:48.057954       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c] <==
	I1029 09:28:33.873609       1 policy_source.go:240] refreshing policies
	I1029 09:28:33.874774       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:28:33.874843       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:28:33.874891       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:28:33.902222       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:28:33.924178       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:28:33.924247       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:28:33.933928       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1029 09:28:33.949722       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:28:33.953659       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:28:33.956662       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:28:33.957610       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:28:33.957632       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:28:33.958165       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:28:33.958353       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:28:33.963888       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:28:33.964510       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:28:33.965792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:28:33.976516       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:28:34.371514       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:28:34.708918       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:28:36.267389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:28:36.302210       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:28:36.450983       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:28:36.504478       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd] <==
	W1029 09:28:20.209469       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209544       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209622       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209702       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209888       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209978       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210035       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210087       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210137       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210184       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210233       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210285       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210334       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210383       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210406       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210429       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210456       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210477       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210502       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210546       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210585       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210610       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210642       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210586       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd] <==
	I1029 09:27:33.545071       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:27:33.545699       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:27:33.546148       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:27:33.546308       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:27:33.546398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:27:33.546668       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:27:33.546858       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:27:33.546901       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:27:33.548383       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:27:33.548460       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:27:33.548470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:27:33.552120       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:27:33.552395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:27:33.552654       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:27:33.554961       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:27:33.555097       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:27:33.555160       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:27:33.555196       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:27:33.555224       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:27:33.560640       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:27:33.561640       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:27:33.564369       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:27:33.573140       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:27:33.576668       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-598473" podCIDRs=["10.244.0.0/24"]
	I1029 09:28:18.509243       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4] <==
	I1029 09:28:36.204020       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:28:36.204064       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:28:36.208803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:28:36.209125       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:28:36.209436       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:28:36.209558       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-598473"
	I1029 09:28:36.209636       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:28:36.210192       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:28:36.212439       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:28:36.212897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:28:36.213909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:28:36.217910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:28:36.221979       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:28:36.232259       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:28:36.232438       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:28:36.234368       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:28:36.236402       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:28:36.244460       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:28:36.244492       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:28:36.244573       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:28:36.244864       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:28:36.255772       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:28:36.258028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:28:36.261699       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:28:36.275449       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56] <==
	I1029 09:28:30.907604       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:28:31.543690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:28:34.040880       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:28:34.040996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:28:34.041277       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:28:34.169545       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:28:34.169674       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:28:34.177604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:28:34.178011       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:28:34.178243       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:28:34.180196       1 config.go:200] "Starting service config controller"
	I1029 09:28:34.180220       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:28:34.180245       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:28:34.180251       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:28:34.180263       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:28:34.180267       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:28:34.181016       1 config.go:309] "Starting node config controller"
	I1029 09:28:34.181039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:28:34.181045       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:28:34.281127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:28:34.281232       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:28:34.281325       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c] <==
	I1029 09:27:35.490812       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:27:35.578117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:27:35.679096       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:27:35.679133       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:27:35.679222       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:27:35.699225       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:27:35.699279       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:27:35.702815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:27:35.703105       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:27:35.703127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:27:35.711056       1 config.go:200] "Starting service config controller"
	I1029 09:27:35.711134       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:27:35.711176       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:27:35.711203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:27:35.711239       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:27:35.711265       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:27:35.711788       1 config.go:309] "Starting node config controller"
	I1029 09:27:35.711854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:27:35.711884       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:27:35.811337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:27:35.811337       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:27:35.811356       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92] <==
	E1029 09:27:26.700946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:27:26.700978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:27:26.701016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:27:26.701053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:27:26.701086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:27:26.701118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:27:26.701192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:27:27.593778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:27:27.653464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:27:27.657790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:27:27.662683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:27:27.676596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:27:27.709979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:27:27.723507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:27:27.779405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:27:27.866203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:27:28.041053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:27:28.041975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1029 09:27:31.079177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:20.199889       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1029 09:28:20.199916       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1029 09:28:20.199936       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1029 09:28:20.199962       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:20.200154       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1029 09:28:20.200168       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f] <==
	I1029 09:28:31.210010       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:28:34.119592       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:28:34.119815       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:28:34.129040       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:28:34.129123       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:28:34.129154       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 09:28:34.129192       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:28:34.130098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:34.130122       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:34.130147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:28:34.130153       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:28:34.230099       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 09:28:34.230391       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:28:34.230454       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.400011    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="aa94bcd67947651441aca381d72c4325" pod="kube-system/kube-apiserver-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.400400    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7b8d07e8fe613d9df9fb2b0671eba8" pod="kube-system/kube-controller-manager-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.400715    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6xj4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="73a37546-9547-4ab6-a47d-2ba7197a11f5" pod="kube-system/kindnet-g6xj4"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.401024    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjggg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d87db520-c253-4583-9374-28fcc707d1dd" pod="kube-system/kube-proxy-tjggg"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.401326    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-tkwf6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8d843afb-d055-43fc-92e1-8816da3ab88b" pod="kube-system/coredns-66bc5c9577-tkwf6"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.401631    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fbf0ebf2a238ffdd0e89a1759ec74d86" pod="kube-system/kube-scheduler-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: I1029 09:28:27.428210    1324 scope.go:117] "RemoveContainer" containerID="8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429034    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3948f65fe872b1a95aa82526180d497a" pod="kube-system/etcd-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429311    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="aa94bcd67947651441aca381d72c4325" pod="kube-system/kube-apiserver-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429530    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7b8d07e8fe613d9df9fb2b0671eba8" pod="kube-system/kube-controller-manager-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429775    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6xj4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="73a37546-9547-4ab6-a47d-2ba7197a11f5" pod="kube-system/kindnet-g6xj4"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.430610    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjggg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d87db520-c253-4583-9374-28fcc707d1dd" pod="kube-system/kube-proxy-tjggg"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.431006    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-tkwf6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8d843afb-d055-43fc-92e1-8816da3ab88b" pod="kube-system/coredns-66bc5c9577-tkwf6"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.431197    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fbf0ebf2a238ffdd0e89a1759ec74d86" pod="kube-system/kube-scheduler-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.440938    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="fbf0ebf2a238ffdd0e89a1759ec74d86" pod="kube-system/kube-scheduler-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.441731    1324 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-598473\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.441876    1324 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-598473\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.441979    1324 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-598473\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.585829    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="3948f65fe872b1a95aa82526180d497a" pod="kube-system/etcd-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.776682    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="aa94bcd67947651441aca381d72c4325" pod="kube-system/kube-apiserver-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.859630    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="dc7b8d07e8fe613d9df9fb2b0671eba8" pod="kube-system/kube-controller-manager-pause-598473"
	Oct 29 09:28:39 pause-598473 kubelet[1324]: W1029 09:28:39.492236    1324 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 29 09:28:47 pause-598473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:28:47 pause-598473 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:28:47 pause-598473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598473 -n pause-598473
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598473 -n pause-598473: exit status 2 (351.767655ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-598473 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-598473
helpers_test.go:243: (dbg) docker inspect pause-598473:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0",
	        "Created": "2025-10-29T09:26:58.905575052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160735,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:26:58.978580165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/hosts",
	        "LogPath": "/var/lib/docker/containers/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0/32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0-json.log",
	        "Name": "/pause-598473",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-598473:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-598473",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "32e47a56cf8ab05cf9994e7c62c233a4ba19d5f1ec55842c95064de193af98a0",
	                "LowerDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a09c9712819c6811876968f80ba563aab48fad4af9b923c856d2c8fa5028f37d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-598473",
	                "Source": "/var/lib/docker/volumes/pause-598473/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-598473",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-598473",
	                "name.minikube.sigs.k8s.io": "pause-598473",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30649ff1508c3d864430c8b7e4ba3545026451f470a8c958ad950e7003299a49",
	            "SandboxKey": "/var/run/docker/netns/30649ff1508c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-598473": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:02:6a:f9:34:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d4a9121a7d7bab15b4a2c83c57c976ec0f3673a69773eaeac6fff0d9a3417cc",
	                    "EndpointID": "0215fbac5c98bb0f8dd4a60c0e198ffe3c0cca391897078039c39584a5c127fc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-598473",
	                        "32e47a56cf8a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
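Editor's note (not part of the test run): the pause-relevant fields in the inspect dump above can be spot-checked without dumping the full JSON by using a Go template. This is a minimal sketch that assumes only the container name pause-598473 from this report and the standard docker inspect --format flag:

	docker inspect --format 'status={{.State.Status}} paused={{.State.Paused}}' pause-598473
	# against the dump above this prints: status=running paused=false

A paused container would instead report status=paused paused=true, which is what TestPause/serial/Pause expects after a successful pause.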
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-598473 -n pause-598473
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-598473 -n pause-598473: exit status 2 (337.972468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-598473 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-598473 logs -n 25: (1.554129377s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-988770 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:22 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p missing-upgrade-648122 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-648122    │ jenkins │ v1.32.0 │ 29 Oct 25 09:22 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ delete  │ -p NoKubernetes-988770                                                                                                                   │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ ssh     │ -p NoKubernetes-988770 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │                     │
	│ stop    │ -p NoKubernetes-988770                                                                                                                   │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p NoKubernetes-988770 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p missing-upgrade-648122 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-648122    │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:24 UTC │
	│ ssh     │ -p NoKubernetes-988770 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │                     │
	│ delete  │ -p NoKubernetes-988770                                                                                                                   │ NoKubernetes-988770       │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:23 UTC │
	│ start   │ -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-392485 │ jenkins │ v1.37.0 │ 29 Oct 25 09:23 UTC │ 29 Oct 25 09:24 UTC │
	│ stop    │ -p kubernetes-upgrade-392485                                                                                                             │ kubernetes-upgrade-392485 │ jenkins │ v1.37.0 │ 29 Oct 25 09:24 UTC │ 29 Oct 25 09:24 UTC │
	│ start   │ -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-392485 │ jenkins │ v1.37.0 │ 29 Oct 25 09:24 UTC │                     │
	│ delete  │ -p missing-upgrade-648122                                                                                                                │ missing-upgrade-648122    │ jenkins │ v1.37.0 │ 29 Oct 25 09:24 UTC │ 29 Oct 25 09:24 UTC │
	│ start   │ -p stopped-upgrade-802711 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-802711    │ jenkins │ v1.32.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ stop    │ stopped-upgrade-802711 stop                                                                                                              │ stopped-upgrade-802711    │ jenkins │ v1.32.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ start   │ -p stopped-upgrade-802711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-802711    │ jenkins │ v1.37.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ delete  │ -p stopped-upgrade-802711                                                                                                                │ stopped-upgrade-802711    │ jenkins │ v1.37.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:25 UTC │
	│ start   │ -p running-upgrade-214661 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-214661    │ jenkins │ v1.32.0 │ 29 Oct 25 09:25 UTC │ 29 Oct 25 09:26 UTC │
	│ start   │ -p running-upgrade-214661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-214661    │ jenkins │ v1.37.0 │ 29 Oct 25 09:26 UTC │ 29 Oct 25 09:26 UTC │
	│ delete  │ -p running-upgrade-214661                                                                                                                │ running-upgrade-214661    │ jenkins │ v1.37.0 │ 29 Oct 25 09:26 UTC │ 29 Oct 25 09:26 UTC │
	│ start   │ -p pause-598473 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-598473              │ jenkins │ v1.37.0 │ 29 Oct 25 09:26 UTC │ 29 Oct 25 09:28 UTC │
	│ start   │ -p pause-598473 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-598473              │ jenkins │ v1.37.0 │ 29 Oct 25 09:28 UTC │ 29 Oct 25 09:28 UTC │
	│ pause   │ -p pause-598473 --alsologtostderr -v=5                                                                                                   │ pause-598473              │ jenkins │ v1.37.0 │ 29 Oct 25 09:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:28:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:28:18.469406  164763 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:28:18.469594  164763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:28:18.469626  164763 out.go:374] Setting ErrFile to fd 2...
	I1029 09:28:18.469649  164763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:28:18.469956  164763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:28:18.470471  164763 out.go:368] Setting JSON to false
	I1029 09:28:18.471467  164763 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4250,"bootTime":1761725848,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:28:18.471566  164763 start.go:143] virtualization:  
	I1029 09:28:18.475253  164763 out.go:179] * [pause-598473] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:28:18.478204  164763 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:28:18.478330  164763 notify.go:221] Checking for updates...
	I1029 09:28:18.483960  164763 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:28:18.486935  164763 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:28:18.489841  164763 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:28:18.492846  164763 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:28:18.495722  164763 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:28:18.499901  164763 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:18.500620  164763 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:28:18.534465  164763 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:28:18.534576  164763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:28:18.604040  164763 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:28:18.594181291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:28:18.604156  164763 docker.go:319] overlay module found
	I1029 09:28:18.607457  164763 out.go:179] * Using the docker driver based on existing profile
	I1029 09:28:18.610737  164763 start.go:309] selected driver: docker
	I1029 09:28:18.610772  164763 start.go:930] validating driver "docker" against &{Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:28:18.610930  164763 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:28:18.611056  164763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:28:18.681905  164763 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:28:18.67218438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:28:18.682322  164763 cni.go:84] Creating CNI manager for ""
	I1029 09:28:18.682398  164763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:28:18.682456  164763 start.go:353] cluster config:
	{Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:28:18.685779  164763 out.go:179] * Starting "pause-598473" primary control-plane node in "pause-598473" cluster
	I1029 09:28:18.688669  164763 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:28:18.691673  164763 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:28:18.694478  164763 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:28:18.694538  164763 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:28:18.694549  164763 cache.go:59] Caching tarball of preloaded images
	I1029 09:28:18.694631  164763 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:28:18.694642  164763 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:28:18.694652  164763 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:28:18.694790  164763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/config.json ...
	I1029 09:28:18.716584  164763 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:28:18.716607  164763 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:28:18.716626  164763 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:28:18.716649  164763 start.go:360] acquireMachinesLock for pause-598473: {Name:mk72356e6ecc3129f08abe6e7883c069226381fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:28:18.716720  164763 start.go:364] duration metric: took 44.693µs to acquireMachinesLock for "pause-598473"
	I1029 09:28:18.716741  164763 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:28:18.716746  164763 fix.go:54] fixHost starting: 
	I1029 09:28:18.716998  164763 cli_runner.go:164] Run: docker container inspect pause-598473 --format={{.State.Status}}
	I1029 09:28:18.733561  164763 fix.go:112] recreateIfNeeded on pause-598473: state=Running err=<nil>
	W1029 09:28:18.733591  164763 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:28:18.736758  164763 out.go:252] * Updating the running docker "pause-598473" container ...
	I1029 09:28:18.736793  164763 machine.go:94] provisionDockerMachine start ...
	I1029 09:28:18.736871  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:18.762143  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:18.762471  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:18.762487  164763 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:28:18.912085  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-598473
	
	I1029 09:28:18.912111  164763 ubuntu.go:182] provisioning hostname "pause-598473"
	I1029 09:28:18.912215  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:18.931105  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:18.931418  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:18.931434  164763 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-598473 && echo "pause-598473" | sudo tee /etc/hostname
	I1029 09:28:19.093958  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-598473
	
	I1029 09:28:19.094039  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:19.111937  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:19.112234  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:19.112258  164763 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-598473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-598473/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-598473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:28:19.265753  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
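	(The shell block above only appends a 127.0.1.1 entry when no line already matches the hostname. A quick confirmation on the node, a sketch assuming the pause-598473 profile shown in the log:
	  minikube ssh -p pause-598473 -- grep pause-598473 /etc/hosts
	  # expect either an existing entry for the container or "127.0.1.1 pause-598473" appended by the block above)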
	I1029 09:28:19.265780  164763 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:28:19.265836  164763 ubuntu.go:190] setting up certificates
	I1029 09:28:19.265854  164763 provision.go:84] configureAuth start
	I1029 09:28:19.265935  164763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598473
	I1029 09:28:19.287106  164763 provision.go:143] copyHostCerts
	I1029 09:28:19.287177  164763 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:28:19.287196  164763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:28:19.287271  164763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:28:19.287384  164763 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:28:19.287395  164763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:28:19.287424  164763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:28:19.287532  164763 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:28:19.287544  164763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:28:19.287572  164763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:28:19.287635  164763 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.pause-598473 san=[127.0.0.1 192.168.85.2 localhost minikube pause-598473]
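	(The server certificate generated here carries the san=[...] list logged above. A minimal check against the generated file, a sketch using the path from the log:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # the DNS and IP entries printed should match the san=[...] list above)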
	I1029 09:28:19.810962  164763 provision.go:177] copyRemoteCerts
	I1029 09:28:19.811028  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:28:19.811067  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:19.833484  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:19.940485  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:28:19.959265  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1029 09:28:19.977772  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
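	(copyRemoteCerts places the machine's TLS material under /etc/docker on the node. A minimal sanity check run inside the node, e.g. via minikube ssh, a sketch:
	  sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	  # prints "/etc/docker/server.pem: OK" when the copied server cert chains to the copied CA)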
	I1029 09:28:19.995613  164763 provision.go:87] duration metric: took 729.724891ms to configureAuth
	I1029 09:28:19.995682  164763 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:28:19.995915  164763 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:19.996038  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:20.019428  164763 main.go:143] libmachine: Using SSH client type: native
	I1029 09:28:20.019748  164763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1029 09:28:20.019772  164763 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:28:20.969584  148690 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.077229735s)
	W1029 09:28:20.969618  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1029 09:28:20.969626  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:20.969638  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:21.016043  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:21.016074  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:21.080676  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:21.080711  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:21.144444  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:21.144480  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:21.177867  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:21.177898  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:21.192747  148690 logs.go:123] Gathering logs for kube-apiserver [bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e] ...
	I1029 09:28:21.192778  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e"
	I1029 09:28:21.228562  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:21.228592  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:23.758364  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:24.985032  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40784->192.168.76.2:8443: read: connection reset by peer
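	(The failed healthz probe can be reproduced by hand while the apiserver restarts. A sketch against the endpoint shown in the log:
	  curl -k https://192.168.76.2:8443/healthz
	  # returns "ok" once the apiserver is serving; a refused or reset connection, as above, while it is down)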
	I1029 09:28:24.985085  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:24.985147  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:25.020076  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:25.020097  148690 cri.go:89] found id: "bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e"
	I1029 09:28:25.020101  148690 cri.go:89] found id: ""
	I1029 09:28:25.020109  148690 logs.go:282] 2 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4 bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e]
	I1029 09:28:25.020168  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.024264  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.028073  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:25.028149  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:25.053676  148690 cri.go:89] found id: ""
	I1029 09:28:25.053699  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.053707  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:25.053713  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:25.053769  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:25.081403  148690 cri.go:89] found id: ""
	I1029 09:28:25.081427  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.081435  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:25.081442  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:25.081496  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:25.108974  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:25.108997  148690 cri.go:89] found id: ""
	I1029 09:28:25.109005  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:25.109059  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.112772  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:25.112844  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:25.142997  148690 cri.go:89] found id: ""
	I1029 09:28:25.143022  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.143031  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:25.143039  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:25.143096  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:25.169963  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:25.169984  148690 cri.go:89] found id: "317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:25.169989  148690 cri.go:89] found id: ""
	I1029 09:28:25.169996  148690 logs.go:282] 2 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94]
	I1029 09:28:25.170049  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.173948  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:25.177606  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:25.177675  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:25.217949  148690 cri.go:89] found id: ""
	I1029 09:28:25.217974  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.217983  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:25.217990  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:25.218046  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:25.251847  148690 cri.go:89] found id: ""
	I1029 09:28:25.251872  148690 logs.go:282] 0 containers: []
	W1029 09:28:25.251881  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:25.251894  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:25.251905  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:25.384105  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:25.384178  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1029 09:28:25.414243  164763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:28:25.414263  164763 machine.go:97] duration metric: took 6.677461773s to provisionDockerMachine
	I1029 09:28:25.414274  164763 start.go:293] postStartSetup for "pause-598473" (driver="docker")
	I1029 09:28:25.414284  164763 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:28:25.414340  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:28:25.414379  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.436163  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.541571  164763 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:28:25.546255  164763 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:28:25.546285  164763 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:28:25.546296  164763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:28:25.546348  164763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:28:25.546437  164763 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:28:25.546547  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:28:25.556505  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:28:25.580087  164763 start.go:296] duration metric: took 165.798591ms for postStartSetup
	I1029 09:28:25.580184  164763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:28:25.580242  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.599370  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.714877  164763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:28:25.726292  164763 fix.go:56] duration metric: took 7.009538286s for fixHost
	I1029 09:28:25.726331  164763 start.go:83] releasing machines lock for "pause-598473", held for 7.009590274s
	I1029 09:28:25.726410  164763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598473
	I1029 09:28:25.749263  164763 ssh_runner.go:195] Run: cat /version.json
	I1029 09:28:25.749330  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.749692  164763 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:28:25.749760  164763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598473
	I1029 09:28:25.778051  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.791540  164763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/pause-598473/id_rsa Username:docker}
	I1029 09:28:25.991089  164763 ssh_runner.go:195] Run: systemctl --version
	I1029 09:28:25.997639  164763 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:28:26.044171  164763 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:28:26.049499  164763 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:28:26.049634  164763 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:28:26.057619  164763 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:28:26.057643  164763 start.go:496] detecting cgroup driver to use...
	I1029 09:28:26.057696  164763 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:28:26.057753  164763 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:28:26.073500  164763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:28:26.086740  164763 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:28:26.086801  164763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:28:26.102047  164763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:28:26.115294  164763 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:28:26.253869  164763 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:28:26.392991  164763 docker.go:234] disabling docker service ...
	I1029 09:28:26.393141  164763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:28:26.408202  164763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:28:26.421559  164763 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:28:26.557979  164763 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:28:26.701796  164763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:28:26.715144  164763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:28:26.729791  164763 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:28:26.729854  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.738691  164763 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:28:26.738763  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.747936  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.757557  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.766266  164763 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:28:26.774488  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.783641  164763 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:28:26.791734  164763 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
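	(The sed edits above adjust /etc/crio/crio.conf.d/02-crio.conf in place. A sketch for checking that the drop-in ended up with the expected values:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected per the edits above:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",)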
	I1029 09:28:26.800528  164763 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:28:26.808090  164763 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:28:26.815459  164763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:28:26.955546  164763 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:28:27.302102  164763 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:28:27.302216  164763 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:28:27.306116  164763 start.go:564] Will wait 60s for crictl version
	I1029 09:28:27.306238  164763 ssh_runner.go:195] Run: which crictl
	I1029 09:28:27.309754  164763 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:28:27.332672  164763 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:28:27.332840  164763 ssh_runner.go:195] Run: crio --version
	I1029 09:28:27.376425  164763 ssh_runner.go:195] Run: crio --version
	I1029 09:28:27.430706  164763 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:28:27.434998  164763 cli_runner.go:164] Run: docker network inspect pause-598473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:28:27.460673  164763 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:28:27.465949  164763 kubeadm.go:884] updating cluster {Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:28:27.466108  164763 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:28:27.466159  164763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:28:27.541464  164763 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:28:27.541484  164763 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:28:27.541545  164763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:28:27.621617  164763 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:28:27.621686  164763 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:28:27.621716  164763 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:28:27.621861  164763 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-598473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
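	(The kubelet unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below. The effective unit can be inspected on the node, a sketch:
	  systemctl cat kubelet
	  # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	  # containing the ExecStart line logged above)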
	I1029 09:28:27.621981  164763 ssh_runner.go:195] Run: crio config
	I1029 09:28:27.774223  164763 cni.go:84] Creating CNI manager for ""
	I1029 09:28:27.774295  164763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:28:27.774334  164763 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:28:27.774391  164763 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-598473 NodeName:pause-598473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:28:27.774582  164763 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-598473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:28:27.774693  164763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:28:27.788763  164763 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:28:27.788914  164763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:28:27.801613  164763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1029 09:28:27.826703  164763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:28:27.848062  164763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
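	(The rendered kubeadm configuration has just been copied to /var/tmp/minikube/kubeadm.yaml.new. Assuming the kubeadm binary shipped for v1.34.1 supports the validate subcommand, it can be checked on the node, a sketch:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # exits 0 when the Init/Cluster/Kubelet/KubeProxy documents above are well-formed)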
	I1029 09:28:27.869428  164763 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:28:27.873508  164763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:28:28.169026  164763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:28:28.187148  164763 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473 for IP: 192.168.85.2
	I1029 09:28:28.187216  164763 certs.go:195] generating shared ca certs ...
	I1029 09:28:28.187248  164763 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:28:28.187442  164763 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:28:28.187536  164763 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:28:28.187564  164763 certs.go:257] generating profile certs ...
	I1029 09:28:28.187707  164763 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.key
	I1029 09:28:28.187841  164763 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/apiserver.key.62d36ef7
	I1029 09:28:28.188186  164763 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/proxy-client.key
	I1029 09:28:28.196091  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:28:28.196195  164763 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:28:28.196238  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:28:28.196292  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:28:28.196372  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:28:28.196436  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:28:28.196525  164763 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:28:28.197206  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:28:28.247854  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:28:28.289299  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:28:28.321465  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:28:28.350248  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 09:28:28.400483  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:28:28.488096  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:28:28.535751  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:28:28.581395  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:28:28.622202  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:28:28.659111  164763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:28:28.712129  164763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:28:28.750632  164763 ssh_runner.go:195] Run: openssl version
	I1029 09:28:28.775443  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:28:28.794725  164763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:28:28.799652  164763 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:28:28.799791  164763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:28:28.883091  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:28:28.912160  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:28:28.923994  164763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:28:28.931938  164763 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:28:28.931999  164763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:28:29.034360  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:28:29.061910  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:28:29.086916  164763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:28:29.093249  164763 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:28:29.093313  164763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:28:29.207119  164763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
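	(The /etc/ssl/certs symlink names used here (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes computed by the openssl x509 -hash calls just above. A sketch of the same derivation for the minikube CA:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  echo "$h"    # b5213941 for this CA, matching the symlink created above
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0")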
	I1029 09:28:29.220277  164763 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:28:29.228408  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:28:29.407590  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:28:29.528752  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:28:29.577884  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:28:29.621166  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:28:29.662729  164763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
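	(Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now. A sketch of the same check with an explicit result:
	  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "valid for at least 24h" || echo "expires within 24h")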
	I1029 09:28:29.713150  164763 kubeadm.go:401] StartCluster: {Name:pause-598473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:28:29.713364  164763 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:28:29.713461  164763 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:28:29.752792  164763 cri.go:89] found id: "7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c"
	I1029 09:28:29.752868  164763 cri.go:89] found id: "a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b"
	I1029 09:28:29.752889  164763 cri.go:89] found id: "99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455"
	I1029 09:28:29.752912  164763 cri.go:89] found id: "e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f"
	I1029 09:28:29.752946  164763 cri.go:89] found id: "b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56"
	I1029 09:28:29.752971  164763 cri.go:89] found id: "3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4"
	I1029 09:28:29.752993  164763 cri.go:89] found id: "7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096"
	I1029 09:28:29.753025  164763 cri.go:89] found id: "d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8"
	I1029 09:28:29.753046  164763 cri.go:89] found id: "c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c"
	I1029 09:28:29.753072  164763 cri.go:89] found id: "8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c"
	I1029 09:28:29.753108  164763 cri.go:89] found id: "ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92"
	I1029 09:28:29.753130  164763 cri.go:89] found id: "8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	I1029 09:28:29.753152  164763 cri.go:89] found id: "331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd"
	I1029 09:28:29.753188  164763 cri.go:89] found id: "2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8"
	I1029 09:28:29.753211  164763 cri.go:89] found id: ""
	I1029 09:28:29.753294  164763 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:28:29.773622  164763 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:28:29Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:28:29.773757  164763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:28:29.788175  164763 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:28:29.788251  164763 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:28:29.788367  164763 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:28:29.801568  164763 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:28:29.802372  164763 kubeconfig.go:125] found "pause-598473" server: "https://192.168.85.2:8443"
	I1029 09:28:29.803411  164763 kapi.go:59] client config for pause-598473: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:28:29.804104  164763 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 09:28:29.804190  164763 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 09:28:29.804214  164763 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 09:28:29.804235  164763 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 09:28:29.804272  164763 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 09:28:29.804715  164763 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:28:29.817506  164763 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:28:29.817589  164763 kubeadm.go:602] duration metric: took 29.293291ms to restartPrimaryControlPlane
	I1029 09:28:29.817617  164763 kubeadm.go:403] duration metric: took 104.476114ms to StartCluster
	I1029 09:28:29.817647  164763 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:28:29.817753  164763 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:28:29.818664  164763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:28:29.818947  164763 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:28:29.819367  164763 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:28:29.819729  164763 config.go:182] Loaded profile config "pause-598473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:28:29.823340  164763 out.go:179] * Enabled addons: 
	I1029 09:28:29.823442  164763 out.go:179] * Verifying Kubernetes components...
	W1029 09:28:25.476859  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:25.476876  148690 logs.go:123] Gathering logs for kube-apiserver [bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e] ...
	I1029 09:28:25.476895  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0061ed47eff52a616c7b3b6a8b792cefd4ee02f4b8ac6d642a481865ce425e"
	I1029 09:28:25.512046  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:25.512078  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:25.544696  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:25.544720  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:25.586941  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:25.586964  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:25.604220  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:25.604303  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:25.648623  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:25.649333  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:25.717122  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:25.717173  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:25.770685  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:25.770714  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:28.358170  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:28.358506  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:28.358542  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:28.358595  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:28.433046  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:28.433064  148690 cri.go:89] found id: ""
	I1029 09:28:28.433071  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:28.433121  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.436959  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:28.437029  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:28.485188  148690 cri.go:89] found id: ""
	I1029 09:28:28.485209  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.485217  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:28.485226  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:28.485283  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:28.541016  148690 cri.go:89] found id: ""
	I1029 09:28:28.541038  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.541046  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:28.541053  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:28.541107  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:28.589048  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:28.589067  148690 cri.go:89] found id: ""
	I1029 09:28:28.589075  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:28.589139  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.595246  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:28.595311  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:28.658455  148690 cri.go:89] found id: ""
	I1029 09:28:28.658476  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.658484  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:28.658491  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:28.658546  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:28.705155  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:28.705174  148690 cri.go:89] found id: "317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:28.705179  148690 cri.go:89] found id: ""
	I1029 09:28:28.705186  148690 logs.go:282] 2 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94]
	I1029 09:28:28.705240  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.709151  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:28.716747  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:28.716820  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:28.782600  148690 cri.go:89] found id: ""
	I1029 09:28:28.782620  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.782629  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:28.782635  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:28.782688  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:28.830553  148690 cri.go:89] found id: ""
	I1029 09:28:28.830575  148690 logs.go:282] 0 containers: []
	W1029 09:28:28.830583  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:28.830597  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:28.830608  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:28.990311  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:28.990394  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:29.012850  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:29.013029  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:29.122132  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:29.122207  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:29.178420  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:29.178446  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:29.293168  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:29.293187  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:29.293199  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:29.355867  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:29.355937  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:29.391111  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:29.391188  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:29.471816  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:29.471897  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:29.826140  164763 addons.go:515] duration metric: took 6.76969ms for enable addons: enabled=[]
	I1029 09:28:29.826248  164763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:28:30.076497  164763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:28:30.099849  164763 node_ready.go:35] waiting up to 6m0s for node "pause-598473" to be "Ready" ...
	I1029 09:28:32.041229  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:32.041584  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:32.041622  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:32.041672  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:32.087119  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:32.087138  148690 cri.go:89] found id: ""
	I1029 09:28:32.087146  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:32.087201  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.093859  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:32.093930  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:32.132217  148690 cri.go:89] found id: ""
	I1029 09:28:32.132237  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.132245  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:32.132252  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:32.132332  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:32.180807  148690 cri.go:89] found id: ""
	I1029 09:28:32.180828  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.180836  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:32.180842  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:32.180897  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:32.217111  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:32.217129  148690 cri.go:89] found id: ""
	I1029 09:28:32.217137  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:32.217188  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.221475  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:32.221594  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:32.277740  148690 cri.go:89] found id: ""
	I1029 09:28:32.277761  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.277769  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:32.277775  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:32.277829  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:32.321267  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:32.321286  148690 cri.go:89] found id: "317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:32.321291  148690 cri.go:89] found id: ""
	I1029 09:28:32.321298  148690 logs.go:282] 2 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94]
	I1029 09:28:32.321352  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.328366  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:32.332556  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:32.332782  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:32.386253  148690 cri.go:89] found id: ""
	I1029 09:28:32.386336  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.386360  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:32.386401  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:32.386499  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:32.429648  148690 cri.go:89] found id: ""
	I1029 09:28:32.429670  148690 logs.go:282] 0 containers: []
	W1029 09:28:32.429679  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:32.429693  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:32.429704  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:32.502853  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:32.502935  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:32.624900  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:32.624958  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:32.624995  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:32.702365  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:32.702439  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:32.754053  148690 logs.go:123] Gathering logs for kube-controller-manager [317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94] ...
	I1029 09:28:32.754079  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 317bb4a9c66839659b5364da8d10f652daeda390db9cc195088eab22768b5e94"
	I1029 09:28:32.814231  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:32.814256  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:32.958252  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:32.958329  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:32.975510  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:32.975651  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:33.056645  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:33.056720  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:33.829807  164763 node_ready.go:49] node "pause-598473" is "Ready"
	I1029 09:28:33.829851  164763 node_ready.go:38] duration metric: took 3.729906134s for node "pause-598473" to be "Ready" ...
	I1029 09:28:33.829865  164763 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:28:33.829927  164763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:28:33.849963  164763 api_server.go:72] duration metric: took 4.030953784s to wait for apiserver process to appear ...
	I1029 09:28:33.849985  164763 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:28:33.850005  164763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:28:33.923972  164763 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:28:33.924043  164763 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:28:34.350261  164763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:28:34.364845  164763 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:28:34.364913  164763 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:28:34.850218  164763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:28:34.858184  164763 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:28:34.859218  164763 api_server.go:141] control plane version: v1.34.1
	I1029 09:28:34.859242  164763 api_server.go:131] duration metric: took 1.0092501s to wait for apiserver health ...
	I1029 09:28:34.859250  164763 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:28:34.862663  164763 system_pods.go:59] 7 kube-system pods found
	I1029 09:28:34.862701  164763 system_pods.go:61] "coredns-66bc5c9577-tkwf6" [8d843afb-d055-43fc-92e1-8816da3ab88b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:28:34.862711  164763 system_pods.go:61] "etcd-pause-598473" [c0e187a0-e38a-44e0-b57c-abc11d5e4c6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:28:34.862719  164763 system_pods.go:61] "kindnet-g6xj4" [73a37546-9547-4ab6-a47d-2ba7197a11f5] Running
	I1029 09:28:34.862726  164763 system_pods.go:61] "kube-apiserver-pause-598473" [6a93240c-59bc-46b5-9b69-af188f338ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:28:34.862733  164763 system_pods.go:61] "kube-controller-manager-pause-598473" [f993caa8-139d-41a6-800c-7f0e16805c9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:28:34.862737  164763 system_pods.go:61] "kube-proxy-tjggg" [d87db520-c253-4583-9374-28fcc707d1dd] Running
	I1029 09:28:34.862746  164763 system_pods.go:61] "kube-scheduler-pause-598473" [183456c1-44f2-4a58-ba59-285f59ed7268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:28:34.862759  164763 system_pods.go:74] duration metric: took 3.500809ms to wait for pod list to return data ...
	I1029 09:28:34.862768  164763 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:28:34.864653  164763 default_sa.go:45] found service account: "default"
	I1029 09:28:34.864677  164763 default_sa.go:55] duration metric: took 1.898582ms for default service account to be created ...
	I1029 09:28:34.864689  164763 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:28:34.867300  164763 system_pods.go:86] 7 kube-system pods found
	I1029 09:28:34.867333  164763 system_pods.go:89] "coredns-66bc5c9577-tkwf6" [8d843afb-d055-43fc-92e1-8816da3ab88b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:28:34.867342  164763 system_pods.go:89] "etcd-pause-598473" [c0e187a0-e38a-44e0-b57c-abc11d5e4c6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:28:34.867348  164763 system_pods.go:89] "kindnet-g6xj4" [73a37546-9547-4ab6-a47d-2ba7197a11f5] Running
	I1029 09:28:34.867383  164763 system_pods.go:89] "kube-apiserver-pause-598473" [6a93240c-59bc-46b5-9b69-af188f338ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:28:34.867398  164763 system_pods.go:89] "kube-controller-manager-pause-598473" [f993caa8-139d-41a6-800c-7f0e16805c9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:28:34.867416  164763 system_pods.go:89] "kube-proxy-tjggg" [d87db520-c253-4583-9374-28fcc707d1dd] Running
	I1029 09:28:34.867423  164763 system_pods.go:89] "kube-scheduler-pause-598473" [183456c1-44f2-4a58-ba59-285f59ed7268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:28:34.867431  164763 system_pods.go:126] duration metric: took 2.735929ms to wait for k8s-apps to be running ...
	I1029 09:28:34.867459  164763 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:28:34.867531  164763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:28:34.881634  164763 system_svc.go:56] duration metric: took 14.182906ms WaitForService to wait for kubelet
	I1029 09:28:34.881665  164763 kubeadm.go:587] duration metric: took 5.062659688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:28:34.881692  164763 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:28:34.884098  164763 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:28:34.884129  164763 node_conditions.go:123] node cpu capacity is 2
	I1029 09:28:34.884141  164763 node_conditions.go:105] duration metric: took 2.443784ms to run NodePressure ...
	I1029 09:28:34.884153  164763 start.go:242] waiting for startup goroutines ...
	I1029 09:28:34.884161  164763 start.go:247] waiting for cluster config update ...
	I1029 09:28:34.884169  164763 start.go:256] writing updated cluster config ...
	I1029 09:28:34.884572  164763 ssh_runner.go:195] Run: rm -f paused
	I1029 09:28:34.888335  164763 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:28:34.888954  164763 kapi.go:59] client config for pause-598473: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/pause-598473/client.key", CAFile:"/home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:28:34.891930  164763 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tkwf6" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:28:36.896813  164763 pod_ready.go:104] pod "coredns-66bc5c9577-tkwf6" is not "Ready", error: <nil>
	I1029 09:28:35.627579  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:35.628008  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:35.628061  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:35.628119  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:35.657268  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:35.657289  148690 cri.go:89] found id: ""
	I1029 09:28:35.657297  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:35.657381  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:35.661143  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:35.661220  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:35.685798  148690 cri.go:89] found id: ""
	I1029 09:28:35.685822  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.685831  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:35.685838  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:35.685892  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:35.711480  148690 cri.go:89] found id: ""
	I1029 09:28:35.711513  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.711522  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:35.711531  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:35.711588  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:35.743827  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:35.743854  148690 cri.go:89] found id: ""
	I1029 09:28:35.743870  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:35.743922  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:35.748108  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:35.748179  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:35.776972  148690 cri.go:89] found id: ""
	I1029 09:28:35.776997  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.777006  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:35.777013  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:35.777070  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:35.814721  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:35.814743  148690 cri.go:89] found id: ""
	I1029 09:28:35.814753  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:35.814809  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:35.818665  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:35.818738  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:35.848104  148690 cri.go:89] found id: ""
	I1029 09:28:35.848134  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.848148  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:35.848155  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:35.848238  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:35.876542  148690 cri.go:89] found id: ""
	I1029 09:28:35.876566  148690 logs.go:282] 0 containers: []
	W1029 09:28:35.876575  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:35.876584  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:35.876600  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:35.918708  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:35.918738  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:35.986320  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:35.986355  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:36.033628  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:36.033708  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:36.129568  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:36.129646  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:36.177856  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:36.177928  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:36.331019  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:36.331053  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:36.349162  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:36.349198  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:36.431707  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:38.933165  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:38.933549  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:38.933633  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:38.933713  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:38.963867  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:38.963899  148690 cri.go:89] found id: ""
	I1029 09:28:38.963908  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:38.963964  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:38.968350  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:38.968423  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:38.999073  148690 cri.go:89] found id: ""
	I1029 09:28:38.999098  148690 logs.go:282] 0 containers: []
	W1029 09:28:38.999106  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:38.999113  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:38.999195  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:39.029402  148690 cri.go:89] found id: ""
	I1029 09:28:39.029425  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.029434  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:39.029441  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:39.029498  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:39.055884  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:39.055904  148690 cri.go:89] found id: ""
	I1029 09:28:39.055912  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:39.055975  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:39.059869  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:39.059980  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:39.087338  148690 cri.go:89] found id: ""
	I1029 09:28:39.087401  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.087424  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:39.087450  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:39.087528  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:39.113095  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:39.113116  148690 cri.go:89] found id: ""
	I1029 09:28:39.113124  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:39.113205  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:39.117033  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:39.117205  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:39.150763  148690 cri.go:89] found id: ""
	I1029 09:28:39.150787  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.150795  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:39.150802  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:39.150857  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:39.189925  148690 cri.go:89] found id: ""
	I1029 09:28:39.189955  148690 logs.go:282] 0 containers: []
	W1029 09:28:39.189976  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:39.189986  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:39.190004  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:39.224620  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:39.224649  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:39.310379  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:39.310415  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:39.339506  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:39.339536  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:39.407883  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:39.407922  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:39.441986  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:39.442016  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:39.584232  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:39.584323  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:39.603301  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:39.603383  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:39.688150  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1029 09:28:38.896961  164763 pod_ready.go:104] pod "coredns-66bc5c9577-tkwf6" is not "Ready", error: <nil>
	I1029 09:28:39.897522  164763 pod_ready.go:94] pod "coredns-66bc5c9577-tkwf6" is "Ready"
	I1029 09:28:39.897552  164763 pod_ready.go:86] duration metric: took 5.005553414s for pod "coredns-66bc5c9577-tkwf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:39.900077  164763 pod_ready.go:83] waiting for pod "etcd-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:28:41.906276  164763 pod_ready.go:104] pod "etcd-pause-598473" is not "Ready", error: <nil>
	I1029 09:28:42.189673  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:42.190191  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:42.190250  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:42.190358  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:42.230916  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:42.230991  148690 cri.go:89] found id: ""
	I1029 09:28:42.231006  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:42.231070  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:42.235840  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:42.235921  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:42.266560  148690 cri.go:89] found id: ""
	I1029 09:28:42.266582  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.266590  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:42.266597  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:42.266661  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:42.302224  148690 cri.go:89] found id: ""
	I1029 09:28:42.302253  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.302262  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:42.302269  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:42.302329  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:42.330749  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:42.330773  148690 cri.go:89] found id: ""
	I1029 09:28:42.330781  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:42.330834  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:42.334842  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:42.334924  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:42.362705  148690 cri.go:89] found id: ""
	I1029 09:28:42.362729  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.362737  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:42.362745  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:42.362821  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:42.390868  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:42.390892  148690 cri.go:89] found id: ""
	I1029 09:28:42.390900  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:42.390977  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:42.394914  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:42.395028  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:42.435556  148690 cri.go:89] found id: ""
	I1029 09:28:42.435581  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.435590  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:42.435597  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:42.435659  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:42.473572  148690 cri.go:89] found id: ""
	I1029 09:28:42.473598  148690 logs.go:282] 0 containers: []
	W1029 09:28:42.473608  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:42.473616  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:42.473628  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:42.609657  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:42.609695  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:42.624723  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:42.624760  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:42.693056  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:42.693075  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:42.693089  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:42.726495  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:42.726531  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:42.790242  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:42.790276  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:42.817886  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:42.817915  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:42.882476  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:42.882516  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:45.426739  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:45.427130  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:45.427177  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:45.427232  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:45.454219  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:45.454239  148690 cri.go:89] found id: ""
	I1029 09:28:45.454247  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:45.454303  148690 ssh_runner.go:195] Run: which crictl
	W1029 09:28:43.906465  164763 pod_ready.go:104] pod "etcd-pause-598473" is not "Ready", error: <nil>
	W1029 09:28:45.907201  164763 pod_ready.go:104] pod "etcd-pause-598473" is not "Ready", error: <nil>
	I1029 09:28:46.405921  164763 pod_ready.go:94] pod "etcd-pause-598473" is "Ready"
	I1029 09:28:46.405951  164763 pod_ready.go:86] duration metric: took 6.505851666s for pod "etcd-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.408360  164763 pod_ready.go:83] waiting for pod "kube-apiserver-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.412664  164763 pod_ready.go:94] pod "kube-apiserver-pause-598473" is "Ready"
	I1029 09:28:46.412688  164763 pod_ready.go:86] duration metric: took 4.300265ms for pod "kube-apiserver-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.414671  164763 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.418935  164763 pod_ready.go:94] pod "kube-controller-manager-pause-598473" is "Ready"
	I1029 09:28:46.418964  164763 pod_ready.go:86] duration metric: took 4.265262ms for pod "kube-controller-manager-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.421055  164763 pod_ready.go:83] waiting for pod "kube-proxy-tjggg" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.605237  164763 pod_ready.go:94] pod "kube-proxy-tjggg" is "Ready"
	I1029 09:28:46.605274  164763 pod_ready.go:86] duration metric: took 184.181374ms for pod "kube-proxy-tjggg" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:46.804480  164763 pod_ready.go:83] waiting for pod "kube-scheduler-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:47.205366  164763 pod_ready.go:94] pod "kube-scheduler-pause-598473" is "Ready"
	I1029 09:28:47.205392  164763 pod_ready.go:86] duration metric: took 400.883438ms for pod "kube-scheduler-pause-598473" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:28:47.205403  164763 pod_ready.go:40] duration metric: took 12.317034406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:28:47.264684  164763 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:28:47.267824  164763 out.go:179] * Done! kubectl is now configured to use "pause-598473" cluster and "default" namespace by default
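	The lines above are minikube's internal pod_ready poll for the pause-598473 profile: each control-plane pod in kube-system must report Ready (or be gone) before the start is declared done, and the duration metrics record how long each wait took. A rough external equivalent, sketched here only for illustration (it assumes the kubeconfig context carries the profile name, reuses the label selectors listed in the log, and picks an arbitrary timeout), would be:

	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      # wait for every pod matching the selector to become Ready, as the log's pod_ready loop does
	      kubectl --context pause-598473 -n kube-system wait pod -l "$sel" \
	        --for=condition=Ready --timeout=120s
	    done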
	I1029 09:28:45.461667  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:45.461745  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:45.487795  148690 cri.go:89] found id: ""
	I1029 09:28:45.487861  148690 logs.go:282] 0 containers: []
	W1029 09:28:45.487885  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:45.487911  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:45.488003  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:45.515507  148690 cri.go:89] found id: ""
	I1029 09:28:45.515529  148690 logs.go:282] 0 containers: []
	W1029 09:28:45.515537  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:45.515543  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:45.515598  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:45.550546  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:45.550565  148690 cri.go:89] found id: ""
	I1029 09:28:45.550579  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:45.550632  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:45.554194  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:45.554258  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:45.580006  148690 cri.go:89] found id: ""
	I1029 09:28:45.580074  148690 logs.go:282] 0 containers: []
	W1029 09:28:45.580114  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:45.580161  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:45.580347  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:45.618262  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:45.618284  148690 cri.go:89] found id: ""
	I1029 09:28:45.618292  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:45.618345  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:45.622040  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:45.622110  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:45.649201  148690 cri.go:89] found id: ""
	I1029 09:28:45.649229  148690 logs.go:282] 0 containers: []
	W1029 09:28:45.649238  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:45.649245  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:45.649325  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:45.679597  148690 cri.go:89] found id: ""
	I1029 09:28:45.679620  148690 logs.go:282] 0 containers: []
	W1029 09:28:45.679628  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:45.679636  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:45.679668  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:45.713784  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:45.713815  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:45.777707  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:45.777741  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:45.805286  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:45.805363  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:45.867925  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:45.867962  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:45.897696  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:45.897724  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:46.022527  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:46.022573  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1029 09:28:46.037939  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:46.037972  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:46.108935  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:48.609916  148690 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:28:48.610305  148690 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1029 09:28:48.610352  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1029 09:28:48.610407  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1029 09:28:48.635354  148690 cri.go:89] found id: "2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:48.635377  148690 cri.go:89] found id: ""
	I1029 09:28:48.635385  148690 logs.go:282] 1 containers: [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4]
	I1029 09:28:48.635437  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:48.638976  148690 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1029 09:28:48.639044  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1029 09:28:48.664532  148690 cri.go:89] found id: ""
	I1029 09:28:48.664570  148690 logs.go:282] 0 containers: []
	W1029 09:28:48.664579  148690 logs.go:284] No container was found matching "etcd"
	I1029 09:28:48.664586  148690 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1029 09:28:48.664645  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1029 09:28:48.691734  148690 cri.go:89] found id: ""
	I1029 09:28:48.691760  148690 logs.go:282] 0 containers: []
	W1029 09:28:48.691769  148690 logs.go:284] No container was found matching "coredns"
	I1029 09:28:48.691775  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1029 09:28:48.691835  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1029 09:28:48.718514  148690 cri.go:89] found id: "a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:48.718538  148690 cri.go:89] found id: ""
	I1029 09:28:48.718546  148690 logs.go:282] 1 containers: [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3]
	I1029 09:28:48.718599  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:48.722268  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1029 09:28:48.722336  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1029 09:28:48.763995  148690 cri.go:89] found id: ""
	I1029 09:28:48.764017  148690 logs.go:282] 0 containers: []
	W1029 09:28:48.764025  148690 logs.go:284] No container was found matching "kube-proxy"
	I1029 09:28:48.764034  148690 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1029 09:28:48.764086  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1029 09:28:48.794282  148690 cri.go:89] found id: "431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:48.794301  148690 cri.go:89] found id: ""
	I1029 09:28:48.794309  148690 logs.go:282] 1 containers: [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa]
	I1029 09:28:48.794364  148690 ssh_runner.go:195] Run: which crictl
	I1029 09:28:48.798233  148690 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1029 09:28:48.798298  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1029 09:28:48.842839  148690 cri.go:89] found id: ""
	I1029 09:28:48.842859  148690 logs.go:282] 0 containers: []
	W1029 09:28:48.842867  148690 logs.go:284] No container was found matching "kindnet"
	I1029 09:28:48.842874  148690 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1029 09:28:48.842928  148690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1029 09:28:48.887003  148690 cri.go:89] found id: ""
	I1029 09:28:48.887024  148690 logs.go:282] 0 containers: []
	W1029 09:28:48.887032  148690 logs.go:284] No container was found matching "storage-provisioner"
	I1029 09:28:48.887041  148690 logs.go:123] Gathering logs for describe nodes ...
	I1029 09:28:48.887052  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1029 09:28:48.985701  148690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1029 09:28:48.985719  148690 logs.go:123] Gathering logs for kube-apiserver [2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4] ...
	I1029 09:28:48.985731  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4"
	I1029 09:28:49.037005  148690 logs.go:123] Gathering logs for kube-scheduler [a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3] ...
	I1029 09:28:49.037076  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a92c044c1aa3773523d78f149009eb4433a5b887ec79913fa584edc4361622c3"
	I1029 09:28:49.133258  148690 logs.go:123] Gathering logs for kube-controller-manager [431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa] ...
	I1029 09:28:49.133288  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 431d5f4d3439f91473fb0cfe25728877b20fcfd0c2d4551b37e7cc5649a70eaa"
	I1029 09:28:49.172620  148690 logs.go:123] Gathering logs for CRI-O ...
	I1029 09:28:49.172650  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1029 09:28:49.249206  148690 logs.go:123] Gathering logs for container status ...
	I1029 09:28:49.249302  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1029 09:28:49.310693  148690 logs.go:123] Gathering logs for kubelet ...
	I1029 09:28:49.310720  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1029 09:28:49.483184  148690 logs.go:123] Gathering logs for dmesg ...
	I1029 09:28:49.484382  148690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
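	At this point the 148690 run has been probing https://192.168.76.2:8443/healthz and getting "connection refused", so between probes it keeps rebuilding the same diagnostic bundle: crictl lookups for each control-plane container, the kubelet and CRI-O journals, dmesg, and a describe-nodes attempt. A condensed reproduction of that loop, using the commands visible above (the curl probe is only an illustrative stand-in for the Go healthz check; the container ID is the one this log found):

	    curl -k https://192.168.76.2:8443/healthz || true
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo /usr/local/bin/crictl logs --tail 400 2bfc1fceb28a423097b03b006c58fc0f281b4ed290b8929ec894f63199c1eec4
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	The sections below ("==> CRI-O <==", "==> container status <==", and so on) are the same kind of bundle, captured for the pause-598473 profile.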
	
	
	==> CRI-O <==
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.677300504Z" level=info msg="Starting container: a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b" id=f215c47b-5346-4366-b1e8-4735ed9e043d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.687015134Z" level=info msg="Started container" PID=2177 containerID=e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f description=kube-system/kube-scheduler-pause-598473/kube-scheduler id=4ac0e04a-31f1-4187-8760-7f079d31b187 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a040c4a177bf9ee8d244be1913d81a25f896f98f02febb02b7fab83d49493eb
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.700243234Z" level=info msg="Started container" PID=2196 containerID=99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455 description=kube-system/coredns-66bc5c9577-tkwf6/coredns id=9ce0a19c-2653-4567-a448-e7e96dbc8739 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b2989e43fe4c1bdc1b7ea5944b350b5906a567b663e7eb596b9de3e752480b3
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.711090402Z" level=info msg="Started container" PID=2206 containerID=a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b description=kube-system/kindnet-g6xj4/kindnet-cni id=f215c47b-5346-4366-b1e8-4735ed9e043d name=/runtime.v1.RuntimeService/StartContainer sandboxID=02b7005eb7e20b0048cf4c4ddf689b3a8893f688795655c73399cf993bbe198f
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.735784727Z" level=info msg="Created container 7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c: kube-system/kube-apiserver-pause-598473/kube-apiserver" id=97c0c8a5-be04-406a-9814-d03a5f7685d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.736619769Z" level=info msg="Starting container: 7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c" id=d561dbf1-dd68-4e7d-8e26-f034951d0ad5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:28:27 pause-598473 crio[2078]: time="2025-10-29T09:28:27.738627028Z" level=info msg="Started container" PID=2224 containerID=7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c description=kube-system/kube-apiserver-pause-598473/kube-apiserver id=d561dbf1-dd68-4e7d-8e26-f034951d0ad5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b48c4155ce95656fdde97a5e70275c16a46ca3ab3119d3630dc01182571ef2b1
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.061049134Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.064890877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.064928235Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.064960112Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.069990114Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.070173812Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.0702706Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.073864563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.073901461Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.073928506Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.077528089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.077570493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.07759492Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.082050789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.08209307Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.08211945Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.085672731Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:28:38 pause-598473 crio[2078]: time="2025-10-29T09:28:38.085717121Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	7dda4d8e9247e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   b48c4155ce956       kube-apiserver-pause-598473            kube-system
	a40e8b1aec6cf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   02b7005eb7e20       kindnet-g6xj4                          kube-system
	99ffc1b8e15db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   3b2989e43fe4c       coredns-66bc5c9577-tkwf6               kube-system
	e240f2f193b4a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   0a040c4a177bf       kube-scheduler-pause-598473            kube-system
	b54a7a1da46f4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   6b51b34e04bb0       kube-proxy-tjggg                       kube-system
	3f24d3b0d1715       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   4250985715d0c       kube-controller-manager-pause-598473   kube-system
	7257e194f3686       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   bfcf114256b60       etcd-pause-598473                      kube-system
	d384a6fc7e5d0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   3b2989e43fe4c       coredns-66bc5c9577-tkwf6               kube-system
	c190688eeeb79       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   6b51b34e04bb0       kube-proxy-tjggg                       kube-system
	8747eed7a2764       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   02b7005eb7e20       kindnet-g6xj4                          kube-system
	ca16a1729c769       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   0a040c4a177bf       kube-scheduler-pause-598473            kube-system
	8027a2710b597       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b48c4155ce956       kube-apiserver-pause-598473            kube-system
	331659622ea96       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4250985715d0c       kube-controller-manager-pause-598473   kube-system
	2413d471a2a42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bfcf114256b60       etcd-pause-598473                      kube-system
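	The container status table is produced by the fallback command shown earlier in the log, which prefers crictl and drops back to docker if crictl is not on the PATH:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Every workload on the node appears twice: an Exited attempt 0 from about a minute ago and a Running attempt 1 from roughly 24-25 seconds ago, both in the same pod sandbox. That pattern is a single restart of each container rather than a crash loop.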
	
	
	==> coredns [99ffc1b8e15dbc056c0325ee48e8afa68322220323c386b6bedb1c4a2ee5e455] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46759 - 46793 "HINFO IN 8079236798295514612.676057281158168028. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013039044s
	
	
	==> coredns [d384a6fc7e5d0182de7245d870f2c33ac8483358e6f6ac6db5e18ba13fa7d9d8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59424 - 18203 "HINFO IN 6784855079271061273.5378442763212798112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024337192s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-598473
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-598473
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=pause-598473
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_27_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:27:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-598473
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:28:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:27:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:27:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:27:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:28:15 +0000   Wed, 29 Oct 2025 09:28:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-598473
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d91088ba-c87b-4c03-af8f-a05de72276c1
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tkwf6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-598473                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-g6xj4                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-598473             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-598473    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-tjggg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-598473             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 77s   kube-proxy       
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 83s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s   kubelet          Node pause-598473 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s   kubelet          Node pause-598473 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s   kubelet          Node pause-598473 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s   node-controller  Node pause-598473 event: Registered Node pause-598473 in Controller
	  Normal   NodeReady                37s   kubelet          Node pause-598473 status is now: NodeReady
	  Normal   RegisteredNode           16s   node-controller  Node pause-598473 event: Registered Node pause-598473 in Controller
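	This node description is collected with the guest's own kubectl binary and kubeconfig rather than the host's; the invocation that produces a describe-nodes section, as shown earlier in the log, has the form:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	The node reports Ready with no taints, and the event list shows two kube-proxy Starting events and two RegisteredNode events, so this snapshot was taken after the pause-598473 apiserver had come back up.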
	
	
	==> dmesg <==
	[Oct29 08:48] overlayfs: idmapped layers are currently not supported
	[Oct29 08:56] overlayfs: idmapped layers are currently not supported
	[  +3.225081] overlayfs: idmapped layers are currently not supported
	[Oct29 08:57] overlayfs: idmapped layers are currently not supported
	[Oct29 08:58] overlayfs: idmapped layers are currently not supported
	[Oct29 08:59] overlayfs: idmapped layers are currently not supported
	[Oct29 09:04] overlayfs: idmapped layers are currently not supported
	[Oct29 09:05] overlayfs: idmapped layers are currently not supported
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
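	The dmesg excerpt is already filtered to warnings and above; the gatherer's command, shown earlier in the log, is:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	The repeated "overlayfs: idmapped layers are currently not supported" lines are typically logged on this 5.15 kernel whenever an overlay mount requests idmapped layers as containers start on the host; the timestamps span many test jobs on this machine, so they are background noise rather than something specific to the pause-598473 failure.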
	
	
	==> etcd [2413d471a2a4209e069fb08050610258c0805f09213c5cf465ffa1c188508fa8] <==
	{"level":"warn","ts":"2025-10-29T09:27:25.398272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.429190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.478134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.494991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.524873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.557526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:27:25.688653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59266","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:28:20.191702Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-29T09:28:20.191758Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-598473","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-29T09:28:20.191855Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T09:28:20.341585Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T09:28:20.341691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:28:20.341727Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-29T09:28:20.341818Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-29T09:28:20.341838Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-29T09:28:20.341912Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T09:28:20.341981Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T09:28:20.342015Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-29T09:28:20.342083Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T09:28:20.342095Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T09:28:20.342103Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:28:20.345023Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-29T09:28:20.345101Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:28:20.345173Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-29T09:28:20.345200Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-598473","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [7257e194f3686f6d742fd1cd0d89139b8bd26bf067856ef661f029216e99b096] <==
	{"level":"warn","ts":"2025-10-29T09:28:31.656929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.681995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.703173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.715478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.734455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.749565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.799134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.800448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.831591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.861193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.877086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.889629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.906732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.928697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.953993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:31.973812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.000140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.015623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.050288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.113790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.133015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.182549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.215015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.230912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:28:32.418506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:28:52 up  1:11,  0 user,  load average: 2.05, 2.59, 2.02
	Linux pause-598473 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
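	The kernel section is a quick host snapshot: uptime and load average, the kernel build string, and the distribution name from os-release. The exact collection command is not shown in this log, but an equivalent snapshot could be taken with (illustrative only):

	    uptime
	    uname -a
	    grep PRETTY_NAME /etc/os-release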
	
	
	==> kindnet [8747eed7a27641339a70bdff96979ff32978a82c63e891fbc1950d2e489f7e1c] <==
	I1029 09:27:35.453934       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:27:35.454181       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:27:35.454363       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:27:35.454385       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:27:35.454399       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:27:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:27:35.655381       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:27:35.655410       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:27:35.655419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:27:35.655517       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:28:05.655030       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:28:05.655229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:28:05.655347       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:28:05.745907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1029 09:28:06.955865       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:28:06.955897       1 metrics.go:72] Registering metrics
	I1029 09:28:06.955968       1 controller.go:711] "Syncing nftables rules"
	I1029 09:28:15.661431       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:28:15.661487       1 main.go:301] handling current node
	
	
	==> kindnet [a40e8b1aec6cf4ffb89caaf694a74a0457d97ce61bc68f6103e3910054c0228b] <==
	I1029 09:28:27.814198       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:28:27.846073       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:28:27.846202       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:28:27.846214       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:28:27.846228       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:28:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:28:28.057538       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:28:28.057602       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:28:28.057662       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:28:28.058407       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:28:33.958718       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:28:33.958813       1 metrics.go:72] Registering metrics
	I1029 09:28:33.958903       1 controller.go:711] "Syncing nftables rules"
	I1029 09:28:38.060516       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:28:38.060699       1 main.go:301] handling current node
	I1029 09:28:48.057922       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:28:48.057954       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7dda4d8e9247e4c48e0541c08b16da24318bbccc701f139085f1241779fd5f7c] <==
	I1029 09:28:33.873609       1 policy_source.go:240] refreshing policies
	I1029 09:28:33.874774       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:28:33.874843       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:28:33.874891       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:28:33.902222       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:28:33.924178       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:28:33.924247       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:28:33.933928       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1029 09:28:33.949722       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:28:33.953659       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:28:33.956662       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:28:33.957610       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:28:33.957632       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:28:33.958165       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:28:33.958353       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:28:33.963888       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:28:33.964510       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:28:33.965792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:28:33.976516       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:28:34.371514       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:28:34.708918       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:28:36.267389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:28:36.302210       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:28:36.450983       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:28:36.504478       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd] <==
	W1029 09:28:20.209469       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209544       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209622       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209702       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209888       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.209978       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210035       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210087       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210137       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210184       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210233       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210285       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210334       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210383       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210406       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210429       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210456       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210477       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210502       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210546       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210585       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210610       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210642       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:28:20.210586       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [331659622ea96eb65f7a270e3e1d8f8fa9f2d2eddfd4e3e8bba99a26abb753dd] <==
	I1029 09:27:33.545071       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:27:33.545699       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:27:33.546148       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:27:33.546308       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:27:33.546398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:27:33.546668       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:27:33.546858       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:27:33.546901       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:27:33.548383       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:27:33.548460       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:27:33.548470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:27:33.552120       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:27:33.552395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:27:33.552654       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:27:33.554961       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:27:33.555097       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:27:33.555160       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:27:33.555196       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:27:33.555224       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:27:33.560640       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:27:33.561640       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:27:33.564369       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:27:33.573140       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:27:33.576668       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-598473" podCIDRs=["10.244.0.0/24"]
	I1029 09:28:18.509243       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [3f24d3b0d17159e35ec8ac73b72ecde2d13c87c4ce788a4d8aece1755628f8b4] <==
	I1029 09:28:36.204020       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:28:36.204064       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:28:36.208803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:28:36.209125       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:28:36.209436       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:28:36.209558       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-598473"
	I1029 09:28:36.209636       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:28:36.210192       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:28:36.212439       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:28:36.212897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:28:36.213909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:28:36.217910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:28:36.221979       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:28:36.232259       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:28:36.232438       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:28:36.234368       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:28:36.236402       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:28:36.244460       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:28:36.244492       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:28:36.244573       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:28:36.244864       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:28:36.255772       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:28:36.258028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:28:36.261699       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:28:36.275449       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [b54a7a1da46f4878031777e1d18042b4b4bba0e73a5204cb18e65a98dfe4bf56] <==
	I1029 09:28:30.907604       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:28:31.543690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:28:34.040880       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:28:34.040996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:28:34.041277       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:28:34.169545       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:28:34.169674       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:28:34.177604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:28:34.178011       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:28:34.178243       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:28:34.180196       1 config.go:200] "Starting service config controller"
	I1029 09:28:34.180220       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:28:34.180245       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:28:34.180251       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:28:34.180263       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:28:34.180267       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:28:34.181016       1 config.go:309] "Starting node config controller"
	I1029 09:28:34.181039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:28:34.181045       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:28:34.281127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:28:34.281232       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:28:34.281325       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c190688eeeb79e9c923c6ec33de1858543704894afd50ecdb214f8e4111e298c] <==
	I1029 09:27:35.490812       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:27:35.578117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:27:35.679096       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:27:35.679133       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:27:35.679222       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:27:35.699225       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:27:35.699279       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:27:35.702815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:27:35.703105       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:27:35.703127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:27:35.711056       1 config.go:200] "Starting service config controller"
	I1029 09:27:35.711134       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:27:35.711176       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:27:35.711203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:27:35.711239       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:27:35.711265       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:27:35.711788       1 config.go:309] "Starting node config controller"
	I1029 09:27:35.711854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:27:35.711884       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:27:35.811337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:27:35.811337       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:27:35.811356       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ca16a1729c7691f2ea4057d58e8323e20627757b080269568c8ba95cd450fa92] <==
	E1029 09:27:26.700946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:27:26.700978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:27:26.701016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:27:26.701053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:27:26.701086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:27:26.701118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:27:26.701192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:27:27.593778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:27:27.653464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:27:27.657790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:27:27.662683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:27:27.676596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:27:27.709979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:27:27.723507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:27:27.779405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:27:27.866203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:27:28.041053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:27:28.041975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1029 09:27:31.079177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:20.199889       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1029 09:28:20.199916       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1029 09:28:20.199936       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1029 09:28:20.199962       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:20.200154       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1029 09:28:20.200168       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e240f2f193b4ad7983ce46038c61646263f3c3252a816cfb9eb501adbc10637f] <==
	I1029 09:28:31.210010       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:28:34.119592       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:28:34.119815       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:28:34.129040       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:28:34.129123       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:28:34.129154       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 09:28:34.129192       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:28:34.130098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:34.130122       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:28:34.130147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:28:34.130153       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:28:34.230099       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 09:28:34.230391       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:28:34.230454       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.400011    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="aa94bcd67947651441aca381d72c4325" pod="kube-system/kube-apiserver-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.400400    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7b8d07e8fe613d9df9fb2b0671eba8" pod="kube-system/kube-controller-manager-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.400715    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6xj4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="73a37546-9547-4ab6-a47d-2ba7197a11f5" pod="kube-system/kindnet-g6xj4"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.401024    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjggg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d87db520-c253-4583-9374-28fcc707d1dd" pod="kube-system/kube-proxy-tjggg"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.401326    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-tkwf6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8d843afb-d055-43fc-92e1-8816da3ab88b" pod="kube-system/coredns-66bc5c9577-tkwf6"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.401631    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fbf0ebf2a238ffdd0e89a1759ec74d86" pod="kube-system/kube-scheduler-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: I1029 09:28:27.428210    1324 scope.go:117] "RemoveContainer" containerID="8027a2710b597803179f3d65c316ab838f3c511999b97880ec0b1f59441db3cd"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429034    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3948f65fe872b1a95aa82526180d497a" pod="kube-system/etcd-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429311    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="aa94bcd67947651441aca381d72c4325" pod="kube-system/kube-apiserver-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429530    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7b8d07e8fe613d9df9fb2b0671eba8" pod="kube-system/kube-controller-manager-pause-598473"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.429775    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6xj4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="73a37546-9547-4ab6-a47d-2ba7197a11f5" pod="kube-system/kindnet-g6xj4"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.430610    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjggg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d87db520-c253-4583-9374-28fcc707d1dd" pod="kube-system/kube-proxy-tjggg"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.431006    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-tkwf6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8d843afb-d055-43fc-92e1-8816da3ab88b" pod="kube-system/coredns-66bc5c9577-tkwf6"
	Oct 29 09:28:27 pause-598473 kubelet[1324]: E1029 09:28:27.431197    1324 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598473\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fbf0ebf2a238ffdd0e89a1759ec74d86" pod="kube-system/kube-scheduler-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.440938    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="fbf0ebf2a238ffdd0e89a1759ec74d86" pod="kube-system/kube-scheduler-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.441731    1324 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-598473\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.441876    1324 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-598473\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.441979    1324 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-598473\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.585829    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="3948f65fe872b1a95aa82526180d497a" pod="kube-system/etcd-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.776682    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="aa94bcd67947651441aca381d72c4325" pod="kube-system/kube-apiserver-pause-598473"
	Oct 29 09:28:33 pause-598473 kubelet[1324]: E1029 09:28:33.859630    1324 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-598473\" is forbidden: User \"system:node:pause-598473\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598473' and this object" podUID="dc7b8d07e8fe613d9df9fb2b0671eba8" pod="kube-system/kube-controller-manager-pause-598473"
	Oct 29 09:28:39 pause-598473 kubelet[1324]: W1029 09:28:39.492236    1324 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 29 09:28:47 pause-598473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:28:47 pause-598473 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:28:47 pause-598473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598473 -n pause-598473
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598473 -n pause-598473: exit status 2 (359.57826ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-598473 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (248.528725ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-162751 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-162751 describe deploy/metrics-server -n kube-system: exit status 1 (79.279264ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-162751 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-162751
helpers_test.go:243: (dbg) docker inspect old-k8s-version-162751:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2",
	        "Created": "2025-10-29T09:31:17.309145207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181750,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:31:17.381570138Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/hosts",
	        "LogPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2-json.log",
	        "Name": "/old-k8s-version-162751",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-162751:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-162751",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2",
	                "LowerDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-162751",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-162751/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-162751",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-162751",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-162751",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96d4ec66141d19dcdec6b8d8721318e7441a21f887071ab9daf0ebc8a728b25",
	            "SandboxKey": "/var/run/docker/netns/e96d4ec66141",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-162751": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:29:b7:98:a7:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b39d4ca145f787b1920f94a4f3933ceac95f90f60a1cf8cbdf99d14ff53419fa",
	                    "EndpointID": "32659d73235b6b90e926e1e97321ca189e404bfa50032a0b481a32ddc0e9573a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-162751",
	                        "ff565e88a53d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162751 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-162751 logs -n 25: (1.194571791s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-937200 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo docker system info                                                                                                                                                                                                      │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo containerd config dump                                                                                                                                                                                                  │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo crio config                                                                                                                                                                                                             │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ delete  │ -p cilium-937200                                                                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:29 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:30 UTC │
	│ delete  │ -p force-systemd-env-116185                                                                                                                                                                                                                   │ force-systemd-env-116185 │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:30 UTC │
	│ start   │ -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ cert-options-699236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:31:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:31:11.212419  181296 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:31:11.212674  181296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:31:11.212703  181296 out.go:374] Setting ErrFile to fd 2...
	I1029 09:31:11.212723  181296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:31:11.213023  181296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:31:11.213512  181296 out.go:368] Setting JSON to false
	I1029 09:31:11.214470  181296 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4423,"bootTime":1761725848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:31:11.214561  181296 start.go:143] virtualization:  
	I1029 09:31:11.220965  181296 out.go:179] * [old-k8s-version-162751] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:31:11.224267  181296 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:31:11.224369  181296 notify.go:221] Checking for updates...
	I1029 09:31:11.230540  181296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:31:11.233590  181296 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:31:11.236739  181296 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:31:11.239947  181296 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:31:11.243056  181296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:31:11.246661  181296 config.go:182] Loaded profile config "cert-expiration-690444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:31:11.246776  181296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:31:11.268344  181296 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:31:11.268462  181296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:31:11.332603  181296 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:31:11.322885049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:31:11.332712  181296 docker.go:319] overlay module found
	I1029 09:31:11.335887  181296 out.go:179] * Using the docker driver based on user configuration
	I1029 09:31:11.338724  181296 start.go:309] selected driver: docker
	I1029 09:31:11.338748  181296 start.go:930] validating driver "docker" against <nil>
	I1029 09:31:11.338763  181296 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:31:11.339512  181296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:31:11.395238  181296 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:31:11.385171691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:31:11.395407  181296 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:31:11.395633  181296 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:31:11.398899  181296 out.go:179] * Using Docker driver with root privileges
	I1029 09:31:11.401816  181296 cni.go:84] Creating CNI manager for ""
	I1029 09:31:11.401891  181296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:31:11.401910  181296 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:31:11.401993  181296 start.go:353] cluster config:
	{Name:old-k8s-version-162751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:31:11.405642  181296 out.go:179] * Starting "old-k8s-version-162751" primary control-plane node in "old-k8s-version-162751" cluster
	I1029 09:31:11.408463  181296 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:31:11.411431  181296 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:31:11.414349  181296 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 09:31:11.414404  181296 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1029 09:31:11.414418  181296 cache.go:59] Caching tarball of preloaded images
	I1029 09:31:11.414524  181296 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:31:11.414542  181296 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1029 09:31:11.414648  181296 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/config.json ...
	I1029 09:31:11.414671  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/config.json: {Name:mka4cf8b47b6a22d2481a59fd9ac8600d6201ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:11.414829  181296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:31:11.434086  181296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:31:11.434108  181296 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:31:11.434122  181296 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:31:11.434148  181296 start.go:360] acquireMachinesLock for old-k8s-version-162751: {Name:mkef74f21f909eed25e0f740aa2a9102a6f5c724 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:31:11.434251  181296 start.go:364] duration metric: took 83.078µs to acquireMachinesLock for "old-k8s-version-162751"
	I1029 09:31:11.434284  181296 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-162751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:31:11.434380  181296 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:31:11.437845  181296 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:31:11.438079  181296 start.go:159] libmachine.API.Create for "old-k8s-version-162751" (driver="docker")
	I1029 09:31:11.438117  181296 client.go:173] LocalClient.Create starting
	I1029 09:31:11.438201  181296 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 09:31:11.438239  181296 main.go:143] libmachine: Decoding PEM data...
	I1029 09:31:11.438257  181296 main.go:143] libmachine: Parsing certificate...
	I1029 09:31:11.438315  181296 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 09:31:11.438337  181296 main.go:143] libmachine: Decoding PEM data...
	I1029 09:31:11.438350  181296 main.go:143] libmachine: Parsing certificate...
	I1029 09:31:11.438717  181296 cli_runner.go:164] Run: docker network inspect old-k8s-version-162751 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:31:11.454380  181296 cli_runner.go:211] docker network inspect old-k8s-version-162751 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:31:11.454466  181296 network_create.go:284] running [docker network inspect old-k8s-version-162751] to gather additional debugging logs...
	I1029 09:31:11.454489  181296 cli_runner.go:164] Run: docker network inspect old-k8s-version-162751
	W1029 09:31:11.474630  181296 cli_runner.go:211] docker network inspect old-k8s-version-162751 returned with exit code 1
	I1029 09:31:11.474658  181296 network_create.go:287] error running [docker network inspect old-k8s-version-162751]: docker network inspect old-k8s-version-162751: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-162751 not found
	I1029 09:31:11.474671  181296 network_create.go:289] output of [docker network inspect old-k8s-version-162751]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-162751 not found
	
	** /stderr **
	I1029 09:31:11.474767  181296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:31:11.491487  181296 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
	I1029 09:31:11.491793  181296 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2a2304196dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:c9:a9:e0:d0:7a} reservation:<nil>}
	I1029 09:31:11.492640  181296 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e863a0178057 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:86:09:fc:5e:55} reservation:<nil>}
	I1029 09:31:11.493054  181296 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d280}
	I1029 09:31:11.493072  181296 network_create.go:124] attempt to create docker network old-k8s-version-162751 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1029 09:31:11.493129  181296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-162751 old-k8s-version-162751
	I1029 09:31:11.553163  181296 network_create.go:108] docker network old-k8s-version-162751 192.168.76.0/24 created
	I1029 09:31:11.553213  181296 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-162751" container
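	The three "skipping subnet" checks above show how the provisioner picks a network: it walks candidate 192.168.x.0/24 ranges, skips any that already back a bridge interface, and creates the first free one with a fixed gateway and MTU. A rough manual equivalent of that scan plus the create call from the log, using only the docker CLI (illustrative sketch, not minikube's internal code):

docker network ls -q | while read -r net; do
  # print each network's name and the subnet(s) it already claims
  docker network inspect "$net" --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
done
# the first free candidate in this run was 192.168.76.0/24
docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
  --label=created_by.minikube.sigs.k8s.io=true \
  --label=name.minikube.sigs.k8s.io=old-k8s-version-162751 old-k8s-version-162751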
	I1029 09:31:11.553287  181296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:31:11.570245  181296 cli_runner.go:164] Run: docker volume create old-k8s-version-162751 --label name.minikube.sigs.k8s.io=old-k8s-version-162751 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:31:11.588886  181296 oci.go:103] Successfully created a docker volume old-k8s-version-162751
	I1029 09:31:11.588966  181296 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-162751-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-162751 --entrypoint /usr/bin/test -v old-k8s-version-162751:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:31:12.141211  181296 oci.go:107] Successfully prepared a docker volume old-k8s-version-162751
	I1029 09:31:12.141300  181296 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 09:31:12.141328  181296 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:31:12.141433  181296 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-162751:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:31:17.169206  181296 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-162751:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.027733931s)
	I1029 09:31:17.169234  181296 kic.go:203] duration metric: took 5.027912272s to extract preloaded images to volume ...
	W1029 09:31:17.169375  181296 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 09:31:17.169480  181296 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:31:17.286277  181296 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-162751 --name old-k8s-version-162751 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-162751 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-162751 --network old-k8s-version-162751 --ip 192.168.76.2 --volume old-k8s-version-162751:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:31:17.634718  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Running}}
	I1029 09:31:17.662205  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:31:17.685823  181296 cli_runner.go:164] Run: docker exec old-k8s-version-162751 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:31:17.746894  181296 oci.go:144] the created container "old-k8s-version-162751" has a running status.
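	The node container is published only on loopback, with ephemeral host ports in front of 8443 (API server), 22 (SSH), 2376, 5000 and 32443; the later steps resolve those ports from the container metadata (the "22/tcp" inspect calls below). To see the whole mapping by hand, something like:

docker port old-k8s-version-162751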
	I1029 09:31:17.746939  181296 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa...
	I1029 09:31:18.286426  181296 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:31:18.322016  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:31:18.346285  181296 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:31:18.346304  181296 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-162751 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:31:18.412562  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:31:18.439187  181296 machine.go:94] provisionDockerMachine start ...
	I1029 09:31:18.439298  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:18.479534  181296 main.go:143] libmachine: Using SSH client type: native
	I1029 09:31:18.479870  181296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1029 09:31:18.479894  181296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:31:18.680170  181296 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-162751
	
	I1029 09:31:18.680208  181296 ubuntu.go:182] provisioning hostname "old-k8s-version-162751"
	I1029 09:31:18.680279  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:18.707152  181296 main.go:143] libmachine: Using SSH client type: native
	I1029 09:31:18.707452  181296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1029 09:31:18.707466  181296 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-162751 && echo "old-k8s-version-162751" | sudo tee /etc/hostname
	I1029 09:31:18.880114  181296 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-162751
	
	I1029 09:31:18.880258  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:18.901430  181296 main.go:143] libmachine: Using SSH client type: native
	I1029 09:31:18.901750  181296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1029 09:31:18.901779  181296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-162751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-162751/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-162751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:31:19.064920  181296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:31:19.064943  181296 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:31:19.064975  181296 ubuntu.go:190] setting up certificates
	I1029 09:31:19.064986  181296 provision.go:84] configureAuth start
	I1029 09:31:19.065044  181296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162751
	I1029 09:31:19.083102  181296 provision.go:143] copyHostCerts
	I1029 09:31:19.083172  181296 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:31:19.083186  181296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:31:19.083269  181296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:31:19.083403  181296 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:31:19.083415  181296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:31:19.083446  181296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:31:19.083512  181296 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:31:19.083521  181296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:31:19.083547  181296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:31:19.083604  181296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-162751 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-162751]
	I1029 09:31:19.440479  181296 provision.go:177] copyRemoteCerts
	I1029 09:31:19.440549  181296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:31:19.440593  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:19.459108  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:19.568188  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:31:19.585960  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1029 09:31:19.608168  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:31:19.626286  181296 provision.go:87] duration metric: took 561.275953ms to configureAuth
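	Everything from here on is pushed to the node over SSH, using the generated machine key and the published 22/tcp port (33043 in this run, per the "new ssh client" lines). During a failed run the same connection details can be reused by hand, for example (the port changes on every start):

ssh -i /home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa \
    -p 33043 docker@127.0.0.1
# or let minikube resolve the port and key itself:
out/minikube-linux-arm64 -p old-k8s-version-162751 ssh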
	I1029 09:31:19.626322  181296 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:31:19.626533  181296 config.go:182] Loaded profile config "old-k8s-version-162751": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:31:19.626655  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:19.646279  181296 main.go:143] libmachine: Using SSH client type: native
	I1029 09:31:19.646602  181296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1029 09:31:19.646617  181296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:31:19.909031  181296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:31:19.909052  181296 machine.go:97] duration metric: took 1.4698462s to provisionDockerMachine
	I1029 09:31:19.909062  181296 client.go:176] duration metric: took 8.470934683s to LocalClient.Create
	I1029 09:31:19.909075  181296 start.go:167] duration metric: took 8.470997296s to libmachine.API.Create "old-k8s-version-162751"
	I1029 09:31:19.909083  181296 start.go:293] postStartSetup for "old-k8s-version-162751" (driver="docker")
	I1029 09:31:19.909092  181296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:31:19.909180  181296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:31:19.909218  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:19.927737  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:20.033166  181296 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:31:20.036886  181296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:31:20.036918  181296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:31:20.036933  181296 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:31:20.037010  181296 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:31:20.037102  181296 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:31:20.037221  181296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:31:20.045608  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:31:20.065844  181296 start.go:296] duration metric: took 156.747361ms for postStartSetup
	I1029 09:31:20.066235  181296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162751
	I1029 09:31:20.087212  181296 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/config.json ...
	I1029 09:31:20.087503  181296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:31:20.087555  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:20.107742  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:20.209244  181296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:31:20.213858  181296 start.go:128] duration metric: took 8.779461375s to createHost
	I1029 09:31:20.213882  181296 start.go:83] releasing machines lock for "old-k8s-version-162751", held for 8.779615863s
	I1029 09:31:20.213989  181296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162751
	I1029 09:31:20.231470  181296 ssh_runner.go:195] Run: cat /version.json
	I1029 09:31:20.231520  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:20.231531  181296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:31:20.231598  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:20.250973  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:20.262040  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:20.356130  181296 ssh_runner.go:195] Run: systemctl --version
	I1029 09:31:20.448689  181296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:31:20.491134  181296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:31:20.495982  181296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:31:20.496079  181296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:31:20.527525  181296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 09:31:20.527547  181296 start.go:496] detecting cgroup driver to use...
	I1029 09:31:20.527578  181296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:31:20.527636  181296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:31:20.547598  181296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:31:20.560774  181296 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:31:20.560839  181296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:31:20.578136  181296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:31:20.598850  181296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:31:20.731780  181296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:31:20.873708  181296 docker.go:234] disabling docker service ...
	I1029 09:31:20.873783  181296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:31:20.897172  181296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:31:20.911654  181296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:31:21.044284  181296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:31:21.177592  181296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
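	After the two passes above, the cri-docker and docker units are stopped, their sockets disabled and their services masked, and containerd is stopped, so CRI-O is the only runtime left for the steps that follow. A quick sanity check inside the node, e.g. via minikube ssh (illustrative, not part of the logged run):

sudo systemctl is-active docker cri-docker containerd crio
sudo systemctl is-enabled docker.socket cri-docker.socket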
	I1029 09:31:21.192527  181296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:31:21.209090  181296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1029 09:31:21.209181  181296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.217918  181296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:31:21.217985  181296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.227591  181296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.237028  181296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.246443  181296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:31:21.254828  181296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.264049  181296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.278036  181296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:31:21.287646  181296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:31:21.295789  181296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:31:21.303540  181296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:31:21.422798  181296 ssh_runner.go:195] Run: sudo systemctl restart crio
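	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching the cgroup manager, putting conmon in the pod cgroup, and opening low ports through a default sysctl. One way to confirm the result on the node after the restart (a sketch; the expected lines are reconstructed from the sed patterns, the full file is not in the log):

sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# expected, roughly:
#   pause_image = "registry.k8s.io/pause:3.9"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",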
	I1029 09:31:21.566383  181296 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:31:21.566465  181296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:31:21.570482  181296 start.go:564] Will wait 60s for crictl version
	I1029 09:31:21.570545  181296 ssh_runner.go:195] Run: which crictl
	I1029 09:31:21.574427  181296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:31:21.599457  181296 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
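	The crictl probe above goes through /etc/crictl.yaml, which was just pointed at the CRI-O socket. The same check can be run by hand, or with the endpoint passed explicitly if the yaml is missing (illustrative):

sudo crictl version
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info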
	I1029 09:31:21.599540  181296 ssh_runner.go:195] Run: crio --version
	I1029 09:31:21.628407  181296 ssh_runner.go:195] Run: crio --version
	I1029 09:31:21.666027  181296 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1029 09:31:21.668848  181296 cli_runner.go:164] Run: docker network inspect old-k8s-version-162751 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:31:21.685717  181296 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:31:21.689651  181296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:31:21.699374  181296 kubeadm.go:884] updating cluster {Name:old-k8s-version-162751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:31:21.699484  181296 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 09:31:21.699537  181296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:31:21.733693  181296 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:31:21.733713  181296 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:31:21.733768  181296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:31:21.762339  181296 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:31:21.762433  181296 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:31:21.762457  181296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1029 09:31:21.762591  181296 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-162751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:31:21.762715  181296 ssh_runner.go:195] Run: crio config
	I1029 09:31:21.842232  181296 cni.go:84] Creating CNI manager for ""
	I1029 09:31:21.842295  181296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:31:21.842337  181296 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:31:21.842394  181296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-162751 NodeName:old-k8s-version-162751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:31:21.842560  181296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-162751"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:31:21.842644  181296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1029 09:31:21.850312  181296 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:31:21.850421  181296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:31:21.858147  181296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1029 09:31:21.870931  181296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:31:21.884032  181296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1029 09:31:21.897161  181296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:31:21.900788  181296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:31:21.910312  181296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:31:22.037417  181296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:31:22.055586  181296 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751 for IP: 192.168.76.2
	I1029 09:31:22.055668  181296 certs.go:195] generating shared ca certs ...
	I1029 09:31:22.055706  181296 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:22.055902  181296 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:31:22.055994  181296 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:31:22.056036  181296 certs.go:257] generating profile certs ...
	I1029 09:31:22.056115  181296 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.key
	I1029 09:31:22.056155  181296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt with IP's: []
	I1029 09:31:22.330114  181296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt ...
	I1029 09:31:22.330146  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: {Name:mk548187983b6c42385082f329d795f7dfd502d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:22.330350  181296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.key ...
	I1029 09:31:22.330366  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.key: {Name:mk0a108728d3669c22683b9a34aa6ff145635278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:22.330464  181296 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.key.aeb67784
	I1029 09:31:22.330485  181296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.crt.aeb67784 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1029 09:31:23.024421  181296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.crt.aeb67784 ...
	I1029 09:31:23.024455  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.crt.aeb67784: {Name:mkbce234bca7309b9f1569dd1a1202e4c899afe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:23.024639  181296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.key.aeb67784 ...
	I1029 09:31:23.024658  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.key.aeb67784: {Name:mke728d7bdb42cec066816c93c2548ae9933e5de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:23.024747  181296 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.crt.aeb67784 -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.crt
	I1029 09:31:23.024829  181296 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.key.aeb67784 -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.key
	I1029 09:31:23.024905  181296 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.key
	I1029 09:31:23.024923  181296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.crt with IP's: []
	I1029 09:31:23.351916  181296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.crt ...
	I1029 09:31:23.351947  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.crt: {Name:mk45fdcdb97fca6f1b407ba1087531eb75e81306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:23.352127  181296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.key ...
	I1029 09:31:23.352144  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.key: {Name:mk76a02be1f06d640b69f17caf858d5e10e2e815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
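The "generating signed profile cert" steps above sign client and serving certificates against the shared minikube CA, with the listed IP SANs baked into the apiserver certificate. A minimal Go sketch of that pattern using only the standard library; the CA here is generated in-process purely to keep the example self-contained, whereas minikube reuses the existing .minikube/ca.key:

// signcert.go - hedged sketch (not minikube's crypto.go): sign a leaf
// certificate with IP SANs against a CA, roughly what the profile cert
// generation above does. Names and lifetimes are illustrative only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed stand-in for the pre-existing minikube CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs the log lists for the apiserver cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}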
	I1029 09:31:23.352374  181296 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:31:23.352419  181296 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:31:23.352441  181296 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:31:23.352466  181296 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:31:23.352499  181296 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:31:23.352530  181296 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:31:23.352576  181296 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:31:23.353155  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:31:23.372992  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:31:23.393341  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:31:23.412841  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:31:23.432046  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1029 09:31:23.450875  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:31:23.469115  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:31:23.491028  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:31:23.512389  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:31:23.534641  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:31:23.553668  181296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:31:23.571719  181296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:31:23.587084  181296 ssh_runner.go:195] Run: openssl version
	I1029 09:31:23.594140  181296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:31:23.610461  181296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:31:23.615093  181296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:31:23.615209  181296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:31:23.658857  181296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:31:23.667646  181296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:31:23.675847  181296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:31:23.680133  181296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:31:23.680250  181296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:31:23.723847  181296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:31:23.732081  181296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:31:23.740786  181296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:31:23.744940  181296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:31:23.745021  181296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:31:23.787016  181296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
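The openssl/ln sequence above follows OpenSSL's hashed-symlink layout: each PEM under /usr/share/ca-certificates is exposed to OpenSSL as /etc/ssl/certs/<subject-hash>.0. A minimal Go sketch of the same idea, not minikube's code; the PEM path is illustrative and openssl must be on PATH:

// hashlink.go - hedged sketch: reproduce the "openssl x509 -hash" plus
// symlink pattern from the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			os.Exit(1)
		}
	}
	fmt.Println("linked", link, "->", pemPath)
}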
	I1029 09:31:23.796210  181296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:31:23.799778  181296 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:31:23.799838  181296 kubeadm.go:401] StartCluster: {Name:old-k8s-version-162751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:31:23.799926  181296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:31:23.799986  181296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:31:23.827995  181296 cri.go:89] found id: ""
	I1029 09:31:23.828082  181296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:31:23.836011  181296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:31:23.843983  181296 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:31:23.844049  181296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:31:23.852216  181296 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:31:23.852241  181296 kubeadm.go:158] found existing configuration files:
	
	I1029 09:31:23.852343  181296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:31:23.860535  181296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:31:23.860611  181296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:31:23.868490  181296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:31:23.878527  181296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:31:23.878619  181296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:31:23.886317  181296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:31:23.894191  181296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:31:23.894256  181296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:31:23.901939  181296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:31:23.910244  181296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:31:23.910315  181296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 09:31:23.918102  181296 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:31:23.965973  181296 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1029 09:31:23.966038  181296 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:31:24.010373  181296 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:31:24.010457  181296 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1029 09:31:24.010498  181296 kubeadm.go:319] OS: Linux
	I1029 09:31:24.010562  181296 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:31:24.010621  181296 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1029 09:31:24.010676  181296 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:31:24.010734  181296 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:31:24.010789  181296 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:31:24.010854  181296 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:31:24.010906  181296 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:31:24.010962  181296 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:31:24.011016  181296 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1029 09:31:24.099733  181296 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:31:24.099898  181296 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:31:24.100019  181296 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1029 09:31:24.256084  181296 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 09:31:24.262342  181296 out.go:252]   - Generating certificates and keys ...
	I1029 09:31:24.262446  181296 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:31:24.262521  181296 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:31:25.159879  181296 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:31:25.803055  181296 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:31:26.063852  181296 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:31:27.180710  181296 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:31:28.042983  181296 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:31:28.043158  181296 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-162751] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1029 09:31:28.725633  181296 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:31:28.725911  181296 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-162751] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1029 09:31:28.997360  181296 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:31:29.272870  181296 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:31:29.966819  181296 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:31:29.967128  181296 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:31:30.118430  181296 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:31:30.506335  181296 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:31:30.722968  181296 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:31:31.371561  181296 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:31:31.372770  181296 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:31:31.375642  181296 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:31:31.379030  181296 out.go:252]   - Booting up control plane ...
	I1029 09:31:31.379212  181296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:31:31.379309  181296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:31:31.380820  181296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:31:31.406961  181296 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:31:31.407072  181296 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:31:31.407120  181296 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:31:31.545977  181296 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1029 09:31:40.048300  181296 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.502419 seconds
	I1029 09:31:40.048456  181296 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:31:40.065251  181296 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:31:40.599627  181296 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:31:40.599872  181296 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-162751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:31:41.112871  181296 kubeadm.go:319] [bootstrap-token] Using token: 38mgnc.qhz8gkl1zyv5mfa9
	I1029 09:31:41.115850  181296 out.go:252]   - Configuring RBAC rules ...
	I1029 09:31:41.115986  181296 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:31:41.121282  181296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:31:41.133878  181296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:31:41.138998  181296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:31:41.143581  181296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:31:41.148726  181296 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:31:41.165734  181296 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:31:41.484762  181296 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:31:41.535704  181296 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:31:41.537349  181296 kubeadm.go:319] 
	I1029 09:31:41.537475  181296 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:31:41.537483  181296 kubeadm.go:319] 
	I1029 09:31:41.537582  181296 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:31:41.537588  181296 kubeadm.go:319] 
	I1029 09:31:41.537614  181296 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:31:41.537913  181296 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:31:41.537973  181296 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:31:41.537978  181296 kubeadm.go:319] 
	I1029 09:31:41.538053  181296 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:31:41.538059  181296 kubeadm.go:319] 
	I1029 09:31:41.538115  181296 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:31:41.538120  181296 kubeadm.go:319] 
	I1029 09:31:41.538183  181296 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:31:41.538292  181296 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:31:41.538391  181296 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:31:41.538397  181296 kubeadm.go:319] 
	I1029 09:31:41.538500  181296 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:31:41.538601  181296 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:31:41.538609  181296 kubeadm.go:319] 
	I1029 09:31:41.538732  181296 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 38mgnc.qhz8gkl1zyv5mfa9 \
	I1029 09:31:41.538868  181296 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:31:41.538895  181296 kubeadm.go:319] 	--control-plane 
	I1029 09:31:41.538900  181296 kubeadm.go:319] 
	I1029 09:31:41.539021  181296 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:31:41.539026  181296 kubeadm.go:319] 
	I1029 09:31:41.539128  181296 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 38mgnc.qhz8gkl1zyv5mfa9 \
	I1029 09:31:41.539259  181296 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:31:41.542156  181296 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:31:41.542317  181296 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:31:41.542345  181296 cni.go:84] Creating CNI manager for ""
	I1029 09:31:41.542355  181296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:31:41.547600  181296 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:31:41.550355  181296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:31:41.555758  181296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1029 09:31:41.555784  181296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:31:41.570638  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:31:42.691717  181296 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.121037908s)
	I1029 09:31:42.691757  181296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:31:42.691889  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:42.691962  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-162751 minikube.k8s.io/updated_at=2025_10_29T09_31_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=old-k8s-version-162751 minikube.k8s.io/primary=true
	I1029 09:31:42.846227  181296 ops.go:34] apiserver oom_adj: -16
	I1029 09:31:42.846337  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:43.346419  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:43.847332  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:44.347402  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:44.847339  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:45.347180  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:45.846745  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:46.346806  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:46.847268  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:47.347013  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:47.847035  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:48.346515  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:48.846539  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:49.346431  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:49.847066  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:50.346587  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:50.846861  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:51.347063  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:51.847046  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:52.346605  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:52.846831  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:53.347075  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:53.847094  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:54.347001  181296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:31:54.466045  181296 kubeadm.go:1114] duration metric: took 11.774197458s to wait for elevateKubeSystemPrivileges
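The repeated "kubectl get sa default" calls above are a poll-until-success wait for the default service account, spaced roughly 500ms apart. A minimal Go sketch of such a loop, not minikube's actual retry helper; the kubeconfig path is taken from the log:

// waitsa.go - hedged sketch: poll "kubectl get sa default" until it
// succeeds or the deadline expires, mirroring the calls in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig") // path from the log
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing above
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
	os.Exit(1)
}

A fixed retry delay also shows up later in this log as the "retry.go:31] will retry after ...ms" lines while waiting for kube-dns.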
	I1029 09:31:54.466075  181296 kubeadm.go:403] duration metric: took 30.666239927s to StartCluster
	I1029 09:31:54.466091  181296 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:54.466146  181296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:31:54.467115  181296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:31:54.467307  181296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:31:54.467449  181296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:31:54.467680  181296 config.go:182] Loaded profile config "old-k8s-version-162751": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:31:54.467711  181296 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:31:54.467767  181296 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-162751"
	I1029 09:31:54.467782  181296 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-162751"
	I1029 09:31:54.467813  181296 host.go:66] Checking if "old-k8s-version-162751" exists ...
	I1029 09:31:54.468273  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:31:54.468860  181296 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-162751"
	I1029 09:31:54.468879  181296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-162751"
	I1029 09:31:54.469137  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:31:54.485524  181296 out.go:179] * Verifying Kubernetes components...
	I1029 09:31:54.489135  181296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:31:54.525940  181296 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-162751"
	I1029 09:31:54.525979  181296 host.go:66] Checking if "old-k8s-version-162751" exists ...
	I1029 09:31:54.526405  181296 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:31:54.526599  181296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:31:54.529737  181296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:31:54.529758  181296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:31:54.529826  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:54.560546  181296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:31:54.560565  181296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:31:54.560620  181296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:31:54.564739  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:54.595425  181296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:31:54.840851  181296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:31:54.841009  181296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:31:54.906913  181296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:31:54.908680  181296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:31:55.890729  181296 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.04960286s)
	I1029 09:31:55.891163  181296 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.050125531s)
	I1029 09:31:55.891184  181296 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
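The sed pipeline completed above rewrites the coredns ConfigMap so a hosts block mapping host.minikube.internal to 192.168.76.1 sits ahead of the forward plugin in the Corefile. A minimal Go sketch of the same Corefile edit on a hard-coded example; the Corefile contents here are illustrative, not the cluster's actual ConfigMap:

// corednshosts.go - hedged sketch: insert a "hosts" block before the
// "forward . /etc/resolv.conf" line, the edit the sed pipeline above applies.
package main

import (
	"fmt"
	"strings"
)

const corefile = `.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}`

func main() {
	hostsBlock := "    hosts {\n       192.168.76.1 host.minikube.internal\n       fallthrough\n    }\n"
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	fmt.Print(out.String())
}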
	I1029 09:31:55.891860  181296 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-162751" to be "Ready" ...
	I1029 09:31:56.248081  181296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341091735s)
	I1029 09:31:56.248148  181296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.339357394s)
	I1029 09:31:56.261868  181296 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:31:56.264808  181296 addons.go:515] duration metric: took 1.797076974s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:31:56.396638  181296 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-162751" context rescaled to 1 replicas
	W1029 09:31:57.896749  181296 node_ready.go:57] node "old-k8s-version-162751" has "Ready":"False" status (will retry)
	W1029 09:32:00.400664  181296 node_ready.go:57] node "old-k8s-version-162751" has "Ready":"False" status (will retry)
	W1029 09:32:02.895337  181296 node_ready.go:57] node "old-k8s-version-162751" has "Ready":"False" status (will retry)
	W1029 09:32:04.895976  181296 node_ready.go:57] node "old-k8s-version-162751" has "Ready":"False" status (will retry)
	W1029 09:32:06.907755  181296 node_ready.go:57] node "old-k8s-version-162751" has "Ready":"False" status (will retry)
	I1029 09:32:08.395525  181296 node_ready.go:49] node "old-k8s-version-162751" is "Ready"
	I1029 09:32:08.395554  181296 node_ready.go:38] duration metric: took 12.503664583s for node "old-k8s-version-162751" to be "Ready" ...
	I1029 09:32:08.395568  181296 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:32:08.395647  181296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:32:08.407754  181296 api_server.go:72] duration metric: took 13.940419796s to wait for apiserver process to appear ...
	I1029 09:32:08.407783  181296 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:32:08.407802  181296 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:32:08.416940  181296 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1029 09:32:08.418311  181296 api_server.go:141] control plane version: v1.28.0
	I1029 09:32:08.418337  181296 api_server.go:131] duration metric: took 10.54626ms to wait for apiserver health ...
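The healthz wait above is a plain HTTPS GET against https://192.168.76.2:8443/healthz that expects a 200 response with body "ok". A minimal Go sketch of that probe; TLS verification is skipped only to keep the example self-contained, whereas minikube validates against its CA:

// healthz.go - hedged sketch: probe the apiserver /healthz endpoint the way
// the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz") // address from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
}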
	I1029 09:32:08.418346  181296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:32:08.422260  181296 system_pods.go:59] 8 kube-system pods found
	I1029 09:32:08.422299  181296 system_pods.go:61] "coredns-5dd5756b68-dq48g" [9e3006d1-0dd2-4238-b886-f3226c1afced] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:32:08.422306  181296 system_pods.go:61] "etcd-old-k8s-version-162751" [f470f3dd-8369-415a-9c62-97971181042a] Running
	I1029 09:32:08.422312  181296 system_pods.go:61] "kindnet-2dggr" [fe1be08e-8f24-43dc-8c0c-3ba27c39174b] Running
	I1029 09:32:08.422317  181296 system_pods.go:61] "kube-apiserver-old-k8s-version-162751" [c6b84f51-7d6d-47e0-8be4-345ebc1911bd] Running
	I1029 09:32:08.422322  181296 system_pods.go:61] "kube-controller-manager-old-k8s-version-162751" [8d67da26-9f6f-4ede-acea-348255d53cec] Running
	I1029 09:32:08.422327  181296 system_pods.go:61] "kube-proxy-zvr7g" [064eda3d-d921-408a-a5fe-e6c64f0ccb0b] Running
	I1029 09:32:08.422332  181296 system_pods.go:61] "kube-scheduler-old-k8s-version-162751" [9a73e529-d190-4a21-974a-c865aeabe280] Running
	I1029 09:32:08.422339  181296 system_pods.go:61] "storage-provisioner" [ea29eea0-c9cd-40b5-8c5c-12469da20364] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:32:08.422349  181296 system_pods.go:74] duration metric: took 3.99712ms to wait for pod list to return data ...
	I1029 09:32:08.422361  181296 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:32:08.424723  181296 default_sa.go:45] found service account: "default"
	I1029 09:32:08.424748  181296 default_sa.go:55] duration metric: took 2.381082ms for default service account to be created ...
	I1029 09:32:08.424757  181296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:32:08.428399  181296 system_pods.go:86] 8 kube-system pods found
	I1029 09:32:08.428430  181296 system_pods.go:89] "coredns-5dd5756b68-dq48g" [9e3006d1-0dd2-4238-b886-f3226c1afced] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:32:08.428437  181296 system_pods.go:89] "etcd-old-k8s-version-162751" [f470f3dd-8369-415a-9c62-97971181042a] Running
	I1029 09:32:08.428444  181296 system_pods.go:89] "kindnet-2dggr" [fe1be08e-8f24-43dc-8c0c-3ba27c39174b] Running
	I1029 09:32:08.428449  181296 system_pods.go:89] "kube-apiserver-old-k8s-version-162751" [c6b84f51-7d6d-47e0-8be4-345ebc1911bd] Running
	I1029 09:32:08.428454  181296 system_pods.go:89] "kube-controller-manager-old-k8s-version-162751" [8d67da26-9f6f-4ede-acea-348255d53cec] Running
	I1029 09:32:08.428458  181296 system_pods.go:89] "kube-proxy-zvr7g" [064eda3d-d921-408a-a5fe-e6c64f0ccb0b] Running
	I1029 09:32:08.428463  181296 system_pods.go:89] "kube-scheduler-old-k8s-version-162751" [9a73e529-d190-4a21-974a-c865aeabe280] Running
	I1029 09:32:08.428469  181296 system_pods.go:89] "storage-provisioner" [ea29eea0-c9cd-40b5-8c5c-12469da20364] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:32:08.428488  181296 retry.go:31] will retry after 293.213021ms: missing components: kube-dns
	I1029 09:32:08.727800  181296 system_pods.go:86] 8 kube-system pods found
	I1029 09:32:08.727839  181296 system_pods.go:89] "coredns-5dd5756b68-dq48g" [9e3006d1-0dd2-4238-b886-f3226c1afced] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:32:08.727847  181296 system_pods.go:89] "etcd-old-k8s-version-162751" [f470f3dd-8369-415a-9c62-97971181042a] Running
	I1029 09:32:08.727853  181296 system_pods.go:89] "kindnet-2dggr" [fe1be08e-8f24-43dc-8c0c-3ba27c39174b] Running
	I1029 09:32:08.727890  181296 system_pods.go:89] "kube-apiserver-old-k8s-version-162751" [c6b84f51-7d6d-47e0-8be4-345ebc1911bd] Running
	I1029 09:32:08.727903  181296 system_pods.go:89] "kube-controller-manager-old-k8s-version-162751" [8d67da26-9f6f-4ede-acea-348255d53cec] Running
	I1029 09:32:08.727907  181296 system_pods.go:89] "kube-proxy-zvr7g" [064eda3d-d921-408a-a5fe-e6c64f0ccb0b] Running
	I1029 09:32:08.727911  181296 system_pods.go:89] "kube-scheduler-old-k8s-version-162751" [9a73e529-d190-4a21-974a-c865aeabe280] Running
	I1029 09:32:08.727917  181296 system_pods.go:89] "storage-provisioner" [ea29eea0-c9cd-40b5-8c5c-12469da20364] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:32:08.727937  181296 retry.go:31] will retry after 267.965797ms: missing components: kube-dns
	I1029 09:32:09.028615  181296 system_pods.go:86] 8 kube-system pods found
	I1029 09:32:09.028662  181296 system_pods.go:89] "coredns-5dd5756b68-dq48g" [9e3006d1-0dd2-4238-b886-f3226c1afced] Running
	I1029 09:32:09.028669  181296 system_pods.go:89] "etcd-old-k8s-version-162751" [f470f3dd-8369-415a-9c62-97971181042a] Running
	I1029 09:32:09.028693  181296 system_pods.go:89] "kindnet-2dggr" [fe1be08e-8f24-43dc-8c0c-3ba27c39174b] Running
	I1029 09:32:09.028698  181296 system_pods.go:89] "kube-apiserver-old-k8s-version-162751" [c6b84f51-7d6d-47e0-8be4-345ebc1911bd] Running
	I1029 09:32:09.028704  181296 system_pods.go:89] "kube-controller-manager-old-k8s-version-162751" [8d67da26-9f6f-4ede-acea-348255d53cec] Running
	I1029 09:32:09.028707  181296 system_pods.go:89] "kube-proxy-zvr7g" [064eda3d-d921-408a-a5fe-e6c64f0ccb0b] Running
	I1029 09:32:09.028714  181296 system_pods.go:89] "kube-scheduler-old-k8s-version-162751" [9a73e529-d190-4a21-974a-c865aeabe280] Running
	I1029 09:32:09.028723  181296 system_pods.go:89] "storage-provisioner" [ea29eea0-c9cd-40b5-8c5c-12469da20364] Running
	I1029 09:32:09.028748  181296 system_pods.go:126] duration metric: took 603.986099ms to wait for k8s-apps to be running ...
	I1029 09:32:09.028761  181296 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:32:09.028817  181296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:32:09.048884  181296 system_svc.go:56] duration metric: took 20.114085ms WaitForService to wait for kubelet
	I1029 09:32:09.048915  181296 kubeadm.go:587] duration metric: took 14.581585407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:32:09.048936  181296 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:32:09.054296  181296 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:32:09.054327  181296 node_conditions.go:123] node cpu capacity is 2
	I1029 09:32:09.054341  181296 node_conditions.go:105] duration metric: took 5.39892ms to run NodePressure ...
	I1029 09:32:09.054353  181296 start.go:242] waiting for startup goroutines ...
	I1029 09:32:09.054363  181296 start.go:247] waiting for cluster config update ...
	I1029 09:32:09.054378  181296 start.go:256] writing updated cluster config ...
	I1029 09:32:09.054669  181296 ssh_runner.go:195] Run: rm -f paused
	I1029 09:32:09.058824  181296 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:32:09.066912  181296 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-dq48g" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.083181  181296 pod_ready.go:94] pod "coredns-5dd5756b68-dq48g" is "Ready"
	I1029 09:32:09.083208  181296 pod_ready.go:86] duration metric: took 16.273077ms for pod "coredns-5dd5756b68-dq48g" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.086926  181296 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.095463  181296 pod_ready.go:94] pod "etcd-old-k8s-version-162751" is "Ready"
	I1029 09:32:09.095491  181296 pod_ready.go:86] duration metric: took 8.538285ms for pod "etcd-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.098853  181296 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.104355  181296 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-162751" is "Ready"
	I1029 09:32:09.104381  181296 pod_ready.go:86] duration metric: took 5.502428ms for pod "kube-apiserver-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.107747  181296 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.463506  181296 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-162751" is "Ready"
	I1029 09:32:09.463537  181296 pod_ready.go:86] duration metric: took 355.76526ms for pod "kube-controller-manager-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:09.663350  181296 pod_ready.go:83] waiting for pod "kube-proxy-zvr7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:10.063603  181296 pod_ready.go:94] pod "kube-proxy-zvr7g" is "Ready"
	I1029 09:32:10.063631  181296 pod_ready.go:86] duration metric: took 400.248979ms for pod "kube-proxy-zvr7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:10.263381  181296 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:10.663221  181296 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-162751" is "Ready"
	I1029 09:32:10.663250  181296 pod_ready.go:86] duration metric: took 399.841108ms for pod "kube-scheduler-old-k8s-version-162751" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:32:10.663263  181296 pod_ready.go:40] duration metric: took 1.604406285s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:32:10.723881  181296 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1029 09:32:10.727046  181296 out.go:203] 
	W1029 09:32:10.730035  181296 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1029 09:32:10.733123  181296 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1029 09:32:10.736788  181296 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-162751" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 09:32:08 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:08.575646048Z" level=info msg="Created container 7a749f0e54ee3f8cabfa2c1d072ca477736d2bbad47b4c04f28f4d68bd670000: kube-system/coredns-5dd5756b68-dq48g/coredns" id=b2787ed1-1f69-45d8-a238-2c71e38a4773 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:32:08 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:08.576758122Z" level=info msg="Starting container: 7a749f0e54ee3f8cabfa2c1d072ca477736d2bbad47b4c04f28f4d68bd670000" id=cdb55ac6-cea4-4412-92ff-b58f08c99a02 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:32:08 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:08.580897389Z" level=info msg="Started container" PID=1949 containerID=7a749f0e54ee3f8cabfa2c1d072ca477736d2bbad47b4c04f28f4d68bd670000 description=kube-system/coredns-5dd5756b68-dq48g/coredns id=cdb55ac6-cea4-4412-92ff-b58f08c99a02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93770248960cb2ef93ab2e01f7a7775a50fb4a48224d56945e840c0f13499a24
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.267061998Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0a1e4eb7-4efc-4d68-9678-921598b41bfc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.267149244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.272526191Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439 UID:1c908328-618a-4e6e-a19d-9960059ef8a7 NetNS:/var/run/netns/90c355ee-9064-4b59-970c-355b9de0f931 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400137c480}] Aliases:map[]}"
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.272562007Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.284469Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439 UID:1c908328-618a-4e6e-a19d-9960059ef8a7 NetNS:/var/run/netns/90c355ee-9064-4b59-970c-355b9de0f931 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400137c480}] Aliases:map[]}"
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.284631701Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.289062695Z" level=info msg="Ran pod sandbox 90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439 with infra container: default/busybox/POD" id=0a1e4eb7-4efc-4d68-9678-921598b41bfc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.290124103Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e602610-f904-4eda-b3ca-572652d4c8ff name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.290361167Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1e602610-f904-4eda-b3ca-572652d4c8ff name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.290409127Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1e602610-f904-4eda-b3ca-572652d4c8ff name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.293060972Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a6f73b0-09ae-42e5-aaef-b64ffbf6ffbd name=/runtime.v1.ImageService/PullImage
	Oct 29 09:32:11 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:11.295507769Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.516826025Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4a6f73b0-09ae-42e5-aaef-b64ffbf6ffbd name=/runtime.v1.ImageService/PullImage
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.517869947Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f19acef7-932a-47a4-8199-e4a2d3c86cf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.520587557Z" level=info msg="Creating container: default/busybox/busybox" id=051ee22e-bbff-4026-9aef-9d40c7c7e2ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.520710381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.526539909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.527251308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.544464766Z" level=info msg="Created container ec19c489ba4b46cd83cc7d70b667fbe85964257fb3ea1b63b3fe77ac071ec6d2: default/busybox/busybox" id=051ee22e-bbff-4026-9aef-9d40c7c7e2ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.545285564Z" level=info msg="Starting container: ec19c489ba4b46cd83cc7d70b667fbe85964257fb3ea1b63b3fe77ac071ec6d2" id=b22b3c1f-fb8d-40d8-904d-8be0a2f8bbc4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:32:13 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:13.547937352Z" level=info msg="Started container" PID=2001 containerID=ec19c489ba4b46cd83cc7d70b667fbe85964257fb3ea1b63b3fe77ac071ec6d2 description=default/busybox/busybox id=b22b3c1f-fb8d-40d8-904d-8be0a2f8bbc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439
	Oct 29 09:32:19 old-k8s-version-162751 crio[837]: time="2025-10-29T09:32:19.093260101Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ec19c489ba4b4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago       Running             busybox                   0                   90de8350b7931       busybox                                          default
	7a749f0e54ee3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      11 seconds ago      Running             coredns                   0                   93770248960cb       coredns-5dd5756b68-dq48g                         kube-system
	fbc07829a1fb6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   d1116f4457d69       storage-provisioner                              kube-system
	5b6fc9ec2b55e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    22 seconds ago      Running             kindnet-cni               0                   112fca0ae97c2       kindnet-2dggr                                    kube-system
	4b8e301c54575       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   c4e9a73276522       kube-proxy-zvr7g                                 kube-system
	62b0c4bdae8c5       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   454ef0944a190       etcd-old-k8s-version-162751                      kube-system
	6eb7be63df6f6       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   261a144c1d3e5       kube-controller-manager-old-k8s-version-162751   kube-system
	8d81316a778a7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   fb1e83954b436       kube-apiserver-old-k8s-version-162751            kube-system
	dca5a4b1588bc       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   41ed6065df26a       kube-scheduler-old-k8s-version-162751            kube-system
	
	
	==> coredns [7a749f0e54ee3f8cabfa2c1d072ca477736d2bbad47b4c04f28f4d68bd670000] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54508 - 10851 "HINFO IN 3844091297631526400.590122808188584726. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023476951s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-162751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-162751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=old-k8s-version-162751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_31_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:31:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-162751
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:32:12 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:32:12 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:32:12 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:32:12 +0000   Wed, 29 Oct 2025 09:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-162751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fe615db9-32dc-431b-8163-4556fb5b38ef
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-dq48g                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-old-k8s-version-162751                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-2dggr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-162751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-162751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-zvr7g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-162751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-162751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node old-k8s-version-162751 event: Registered Node old-k8s-version-162751 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-162751 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct29 08:59] overlayfs: idmapped layers are currently not supported
	[Oct29 09:04] overlayfs: idmapped layers are currently not supported
	[Oct29 09:05] overlayfs: idmapped layers are currently not supported
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [62b0c4bdae8c592bf603cc0108f2c311e27931987da0bcf22c3d3977e765aa0b] <==
	{"level":"info","ts":"2025-10-29T09:31:33.931459Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:31:33.931631Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:31:33.931666Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:31:33.931929Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:31:33.931951Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:31:33.932197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-29T09:31:33.932355Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-29T09:31:34.406047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-29T09:31:34.406164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-29T09:31:34.40622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-29T09:31:34.406269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:31:34.406304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-29T09:31:34.406356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-29T09:31:34.406394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-29T09:31:34.411821Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:31:34.415157Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-162751 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:31:34.41519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:31:34.415353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:31:34.415404Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:31:34.415446Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:31:34.416521Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-29T09:31:34.417183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:31:34.422026Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-29T09:31:34.425762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:31:34.425835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:32:20 up  1:14,  0 user,  load average: 2.71, 3.54, 2.57
	Linux old-k8s-version-162751 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b6fc9ec2b55ebfee7368757c9f8c09599a5e20da79dbe9930a216fbf52a908b] <==
	I1029 09:31:57.748117       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:31:57.748577       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:31:57.748715       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:31:57.748733       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:31:57.748742       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:31:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:31:57.949892       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:31:57.949960       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:31:57.949995       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:31:57.950754       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:31:58.150130       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:31:58.150225       1 metrics.go:72] Registering metrics
	I1029 09:31:58.150324       1 controller.go:711] "Syncing nftables rules"
	I1029 09:32:07.957241       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:32:07.957301       1 main.go:301] handling current node
	I1029 09:32:17.951689       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:32:17.951725       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8d81316a778a743d8e5ca697e7a90bf39e68fbb0608083ed4c189b86aee52b9a] <==
	I1029 09:31:38.334696       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1029 09:31:38.338874       1 controller.go:624] quota admission added evaluator for: namespaces
	I1029 09:31:38.393378       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1029 09:31:38.394395       1 aggregator.go:166] initial CRD sync complete...
	I1029 09:31:38.394470       1 autoregister_controller.go:141] Starting autoregister controller
	I1029 09:31:38.394498       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:31:38.394527       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:31:38.395159       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1029 09:31:38.402592       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:31:38.410271       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:31:38.997049       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:31:39.011189       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:31:39.011217       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:31:39.673438       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:31:39.726421       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:31:39.837750       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:31:39.846010       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1029 09:31:39.847149       1 controller.go:624] quota admission added evaluator for: endpoints
	I1029 09:31:39.854930       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:31:40.335322       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1029 09:31:41.466276       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1029 09:31:41.482786       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:31:41.497372       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1029 09:31:54.032614       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:31:54.092238       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6eb7be63df6f669c5786ab0a599ee4b3dbf22c1ecdfa71309caf0ce301e3cc5e] <==
	I1029 09:31:54.090484       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:31:54.094095       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zvr7g"
	I1029 09:31:54.100504       1 shared_informer.go:318] Caches are synced for endpoint
	I1029 09:31:54.141199       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1029 09:31:54.143039       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1029 09:31:54.156470       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:31:54.222079       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6n6wf"
	I1029 09:31:54.235091       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-dq48g"
	I1029 09:31:54.253587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="126.916099ms"
	I1029 09:31:54.265960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.311542ms"
	I1029 09:31:54.266113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.05µs"
	I1029 09:31:54.482754       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:31:54.535730       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:31:54.535792       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1029 09:31:55.946934       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1029 09:31:55.987924       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6n6wf"
	I1029 09:31:56.030350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.132047ms"
	I1029 09:31:56.055853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.450873ms"
	I1029 09:31:56.055999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.214µs"
	I1029 09:32:08.170212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.637µs"
	I1029 09:32:08.183101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.724µs"
	I1029 09:32:08.935423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.283µs"
	I1029 09:32:08.986630       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1029 09:32:09.000687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.262811ms"
	I1029 09:32:09.000873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.984µs"
	
	
	==> kube-proxy [4b8e301c54575c05754b59f663b4728011a4b27213730c06dec3fd1beee156ca] <==
	I1029 09:31:54.763968       1 server_others.go:69] "Using iptables proxy"
	I1029 09:31:54.817373       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1029 09:31:54.855306       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:31:54.859757       1 server_others.go:152] "Using iptables Proxier"
	I1029 09:31:54.859798       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1029 09:31:54.859816       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1029 09:31:54.859842       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1029 09:31:54.860042       1 server.go:846] "Version info" version="v1.28.0"
	I1029 09:31:54.860052       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:31:54.861328       1 config.go:188] "Starting service config controller"
	I1029 09:31:54.861351       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1029 09:31:54.861370       1 config.go:97] "Starting endpoint slice config controller"
	I1029 09:31:54.861374       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1029 09:31:54.861801       1 config.go:315] "Starting node config controller"
	I1029 09:31:54.861807       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1029 09:31:54.963002       1 shared_informer.go:318] Caches are synced for node config
	I1029 09:31:54.963046       1 shared_informer.go:318] Caches are synced for service config
	I1029 09:31:54.963060       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [dca5a4b1588bc91c923dfb3c72444ef5a04f654e957f415210d3c55be00e5da0] <==
	W1029 09:31:38.652559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1029 09:31:38.652613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1029 09:31:38.652736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1029 09:31:38.652823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1029 09:31:38.660939       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1029 09:31:38.661046       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:31:38.664605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1029 09:31:38.664641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1029 09:31:38.664784       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1029 09:31:38.664796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1029 09:31:38.664852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1029 09:31:38.664865       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1029 09:31:38.664941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1029 09:31:38.664951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1029 09:31:38.665010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1029 09:31:38.665028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1029 09:31:38.665172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1029 09:31:38.665189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1029 09:31:38.665215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1029 09:31:38.665225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1029 09:31:38.665238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1029 09:31:38.665247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1029 09:31:39.686978       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1029 09:31:39.687016       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1029 09:31:42.644543       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.183738    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhp7\" (UniqueName: \"kubernetes.io/projected/064eda3d-d921-408a-a5fe-e6c64f0ccb0b-kube-api-access-krhp7\") pod \"kube-proxy-zvr7g\" (UID: \"064eda3d-d921-408a-a5fe-e6c64f0ccb0b\") " pod="kube-system/kube-proxy-zvr7g"
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.183844    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1be08e-8f24-43dc-8c0c-3ba27c39174b-xtables-lock\") pod \"kindnet-2dggr\" (UID: \"fe1be08e-8f24-43dc-8c0c-3ba27c39174b\") " pod="kube-system/kindnet-2dggr"
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.183940    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/064eda3d-d921-408a-a5fe-e6c64f0ccb0b-lib-modules\") pod \"kube-proxy-zvr7g\" (UID: \"064eda3d-d921-408a-a5fe-e6c64f0ccb0b\") " pod="kube-system/kube-proxy-zvr7g"
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.184046    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fe1be08e-8f24-43dc-8c0c-3ba27c39174b-cni-cfg\") pod \"kindnet-2dggr\" (UID: \"fe1be08e-8f24-43dc-8c0c-3ba27c39174b\") " pod="kube-system/kindnet-2dggr"
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.184174    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe1be08e-8f24-43dc-8c0c-3ba27c39174b-lib-modules\") pod \"kindnet-2dggr\" (UID: \"fe1be08e-8f24-43dc-8c0c-3ba27c39174b\") " pod="kube-system/kindnet-2dggr"
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.184268    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/064eda3d-d921-408a-a5fe-e6c64f0ccb0b-xtables-lock\") pod \"kube-proxy-zvr7g\" (UID: \"064eda3d-d921-408a-a5fe-e6c64f0ccb0b\") " pod="kube-system/kube-proxy-zvr7g"
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: W1029 09:31:54.416764    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-112fca0ae97c292f5bab3b89c38db724f8d95834c31e36e8e6d49e3e93d44be8 WatchSource:0}: Error finding container 112fca0ae97c292f5bab3b89c38db724f8d95834c31e36e8e6d49e3e93d44be8: Status 404 returned error can't find the container with id 112fca0ae97c292f5bab3b89c38db724f8d95834c31e36e8e6d49e3e93d44be8
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: W1029 09:31:54.446967    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-c4e9a732765227f5861c7fe1468f660dcd09ce6ee29c4d015525d9ba028642b7 WatchSource:0}: Error finding container c4e9a732765227f5861c7fe1468f660dcd09ce6ee29c4d015525d9ba028642b7: Status 404 returned error can't find the container with id c4e9a732765227f5861c7fe1468f660dcd09ce6ee29c4d015525d9ba028642b7
	Oct 29 09:31:54 old-k8s-version-162751 kubelet[1369]: I1029 09:31:54.880573    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zvr7g" podStartSLOduration=0.880526187 podCreationTimestamp="2025-10-29 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:31:54.876680593 +0000 UTC m=+13.457995849" watchObservedRunningTime="2025-10-29 09:31:54.880526187 +0000 UTC m=+13.461841460"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.131883    1369 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.163980    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2dggr" podStartSLOduration=10.967869031 podCreationTimestamp="2025-10-29 09:31:54 +0000 UTC" firstStartedPulling="2025-10-29 09:31:54.421334276 +0000 UTC m=+13.002649516" lastFinishedPulling="2025-10-29 09:31:57.617394972 +0000 UTC m=+16.198710212" observedRunningTime="2025-10-29 09:31:57.912082167 +0000 UTC m=+16.493397415" watchObservedRunningTime="2025-10-29 09:32:08.163929727 +0000 UTC m=+26.745244975"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.164433    1369 topology_manager.go:215] "Topology Admit Handler" podUID="9e3006d1-0dd2-4238-b886-f3226c1afced" podNamespace="kube-system" podName="coredns-5dd5756b68-dq48g"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.168892    1369 topology_manager.go:215] "Topology Admit Handler" podUID="ea29eea0-c9cd-40b5-8c5c-12469da20364" podNamespace="kube-system" podName="storage-provisioner"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.207227    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7s6\" (UniqueName: \"kubernetes.io/projected/9e3006d1-0dd2-4238-b886-f3226c1afced-kube-api-access-9t7s6\") pod \"coredns-5dd5756b68-dq48g\" (UID: \"9e3006d1-0dd2-4238-b886-f3226c1afced\") " pod="kube-system/coredns-5dd5756b68-dq48g"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.207286    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpvqm\" (UniqueName: \"kubernetes.io/projected/ea29eea0-c9cd-40b5-8c5c-12469da20364-kube-api-access-lpvqm\") pod \"storage-provisioner\" (UID: \"ea29eea0-c9cd-40b5-8c5c-12469da20364\") " pod="kube-system/storage-provisioner"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.207317    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea29eea0-c9cd-40b5-8c5c-12469da20364-tmp\") pod \"storage-provisioner\" (UID: \"ea29eea0-c9cd-40b5-8c5c-12469da20364\") " pod="kube-system/storage-provisioner"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.207344    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e3006d1-0dd2-4238-b886-f3226c1afced-config-volume\") pod \"coredns-5dd5756b68-dq48g\" (UID: \"9e3006d1-0dd2-4238-b886-f3226c1afced\") " pod="kube-system/coredns-5dd5756b68-dq48g"
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: W1029 09:32:08.483498    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-d1116f4457d698f42052d2634f8b738f66173ef964b56ebc2a36a56443b2d52a WatchSource:0}: Error finding container d1116f4457d698f42052d2634f8b738f66173ef964b56ebc2a36a56443b2d52a: Status 404 returned error can't find the container with id d1116f4457d698f42052d2634f8b738f66173ef964b56ebc2a36a56443b2d52a
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: W1029 09:32:08.513134    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-93770248960cb2ef93ab2e01f7a7775a50fb4a48224d56945e840c0f13499a24 WatchSource:0}: Error finding container 93770248960cb2ef93ab2e01f7a7775a50fb4a48224d56945e840c0f13499a24: Status 404 returned error can't find the container with id 93770248960cb2ef93ab2e01f7a7775a50fb4a48224d56945e840c0f13499a24
	Oct 29 09:32:08 old-k8s-version-162751 kubelet[1369]: I1029 09:32:08.970932    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dq48g" podStartSLOduration=14.970888322 podCreationTimestamp="2025-10-29 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:32:08.936454523 +0000 UTC m=+27.517769763" watchObservedRunningTime="2025-10-29 09:32:08.970888322 +0000 UTC m=+27.552203570"
	Oct 29 09:32:10 old-k8s-version-162751 kubelet[1369]: I1029 09:32:10.965237    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.965196615 podCreationTimestamp="2025-10-29 09:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:32:09.033673745 +0000 UTC m=+27.614988985" watchObservedRunningTime="2025-10-29 09:32:10.965196615 +0000 UTC m=+29.546511855"
	Oct 29 09:32:10 old-k8s-version-162751 kubelet[1369]: I1029 09:32:10.965523    1369 topology_manager.go:215] "Topology Admit Handler" podUID="1c908328-618a-4e6e-a19d-9960059ef8a7" podNamespace="default" podName="busybox"
	Oct 29 09:32:11 old-k8s-version-162751 kubelet[1369]: I1029 09:32:11.026607    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntdx\" (UniqueName: \"kubernetes.io/projected/1c908328-618a-4e6e-a19d-9960059ef8a7-kube-api-access-bntdx\") pod \"busybox\" (UID: \"1c908328-618a-4e6e-a19d-9960059ef8a7\") " pod="default/busybox"
	Oct 29 09:32:11 old-k8s-version-162751 kubelet[1369]: W1029 09:32:11.288797    1369 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439 WatchSource:0}: Error finding container 90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439: Status 404 returned error can't find the container with id 90de8350b7931624017d79b22d9a43b2ad9a0e613d979b361456042f04b43439
	Oct 29 09:32:13 old-k8s-version-162751 kubelet[1369]: I1029 09:32:13.944575    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.718070314 podCreationTimestamp="2025-10-29 09:32:10 +0000 UTC" firstStartedPulling="2025-10-29 09:32:11.290630601 +0000 UTC m=+29.871945841" lastFinishedPulling="2025-10-29 09:32:13.517088378 +0000 UTC m=+32.098403618" observedRunningTime="2025-10-29 09:32:13.943804418 +0000 UTC m=+32.525119666" watchObservedRunningTime="2025-10-29 09:32:13.944528091 +0000 UTC m=+32.525843339"
	
	
	==> storage-provisioner [fbc07829a1fb664249eb2b1e7bbc6cb8c487e31b4b87f72e13fe84d7c99c1d91] <==
	I1029 09:32:08.540915       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:32:08.557846       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:32:08.557890       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1029 09:32:08.571461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:32:08.571624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162751_983b3207-821a-47a2-adfd-1f0b75019793!
	I1029 09:32:08.574065       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a84a4b-3609-4d3c-a5d7-cfe05ff94030", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-162751_983b3207-821a-47a2-adfd-1f0b75019793 became leader
	I1029 09:32:08.671991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162751_983b3207-821a-47a2-adfd-1f0b75019793!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162751 -n old-k8s-version-162751
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-162751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-162751 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-162751 --alsologtostderr -v=1: exit status 80 (2.528444978s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-162751 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:33:35.277873  187270 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:33:35.278040  187270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:35.278050  187270 out.go:374] Setting ErrFile to fd 2...
	I1029 09:33:35.278055  187270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:35.278337  187270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:33:35.278634  187270 out.go:368] Setting JSON to false
	I1029 09:33:35.278663  187270 mustload.go:66] Loading cluster: old-k8s-version-162751
	I1029 09:33:35.279321  187270 config.go:182] Loaded profile config "old-k8s-version-162751": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:33:35.280616  187270 cli_runner.go:164] Run: docker container inspect old-k8s-version-162751 --format={{.State.Status}}
	I1029 09:33:35.313726  187270 host.go:66] Checking if "old-k8s-version-162751" exists ...
	I1029 09:33:35.314015  187270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:35.386443  187270 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-29 09:33:35.376541696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:35.387085  187270 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-162751 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:33:35.391264  187270 out.go:179] * Pausing node old-k8s-version-162751 ... 
	I1029 09:33:35.394369  187270 host.go:66] Checking if "old-k8s-version-162751" exists ...
	I1029 09:33:35.395548  187270 ssh_runner.go:195] Run: systemctl --version
	I1029 09:33:35.395605  187270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162751
	I1029 09:33:35.423275  187270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/old-k8s-version-162751/id_rsa Username:docker}
	I1029 09:33:35.535772  187270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:33:35.578436  187270 pause.go:52] kubelet running: true
	I1029 09:33:35.578509  187270 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:33:35.883574  187270 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:33:35.883672  187270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:33:35.980462  187270 cri.go:89] found id: "402b270dff1ce36c60626612f013ab04776b9d0049122dd1fc5aa0d5c98c2b9b"
	I1029 09:33:35.980482  187270 cri.go:89] found id: "47dbb1a9d8df6448d47893f3a3717f32a5db0b3f6ef22f1cd8df505a4683dc91"
	I1029 09:33:35.980487  187270 cri.go:89] found id: "6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af"
	I1029 09:33:35.980491  187270 cri.go:89] found id: "4b73cff8c02ceab1c3bbb9d9b208c88f66a612ab8519eaf85d5e65cd9bf0e4b8"
	I1029 09:33:35.980495  187270 cri.go:89] found id: "2caaaff66733a607dc3dcf0a9fda574cba6e68a7ed1972b5ba272c9ebca233b9"
	I1029 09:33:35.980499  187270 cri.go:89] found id: "b78acb0b4196df036f132bb8dbe1317e4d47239b19065d5c77f8dbaf30d95978"
	I1029 09:33:35.980503  187270 cri.go:89] found id: "a5366971dd2d52fc09c0ee8faad87d9d554996df31f7e1674b9d9b415dce9d79"
	I1029 09:33:35.980506  187270 cri.go:89] found id: "85f50a83501bd8c007c1d4b5360ff663d8311adaae8d6d89173f2b09d0a448dc"
	I1029 09:33:35.980509  187270 cri.go:89] found id: "15deeb92de4799a5896e0b1d2bb95ad8660db0e8da65e42390544d6bec6b7088"
	I1029 09:33:35.980516  187270 cri.go:89] found id: "09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	I1029 09:33:35.980523  187270 cri.go:89] found id: "7cc19ed872138312e19fcf79fd56294e7666859c5fa415c4222ecb63f7ac594a"
	I1029 09:33:35.980526  187270 cri.go:89] found id: ""
	I1029 09:33:35.980573  187270 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:33:36.006926  187270 retry.go:31] will retry after 263.253018ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:33:36Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:33:36.271234  187270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:33:36.284851  187270 pause.go:52] kubelet running: false
	I1029 09:33:36.284927  187270 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:33:36.472886  187270 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:33:36.472966  187270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:33:36.542381  187270 cri.go:89] found id: "402b270dff1ce36c60626612f013ab04776b9d0049122dd1fc5aa0d5c98c2b9b"
	I1029 09:33:36.542403  187270 cri.go:89] found id: "47dbb1a9d8df6448d47893f3a3717f32a5db0b3f6ef22f1cd8df505a4683dc91"
	I1029 09:33:36.542409  187270 cri.go:89] found id: "6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af"
	I1029 09:33:36.542412  187270 cri.go:89] found id: "4b73cff8c02ceab1c3bbb9d9b208c88f66a612ab8519eaf85d5e65cd9bf0e4b8"
	I1029 09:33:36.542415  187270 cri.go:89] found id: "2caaaff66733a607dc3dcf0a9fda574cba6e68a7ed1972b5ba272c9ebca233b9"
	I1029 09:33:36.542420  187270 cri.go:89] found id: "b78acb0b4196df036f132bb8dbe1317e4d47239b19065d5c77f8dbaf30d95978"
	I1029 09:33:36.542423  187270 cri.go:89] found id: "a5366971dd2d52fc09c0ee8faad87d9d554996df31f7e1674b9d9b415dce9d79"
	I1029 09:33:36.542426  187270 cri.go:89] found id: "85f50a83501bd8c007c1d4b5360ff663d8311adaae8d6d89173f2b09d0a448dc"
	I1029 09:33:36.542443  187270 cri.go:89] found id: "15deeb92de4799a5896e0b1d2bb95ad8660db0e8da65e42390544d6bec6b7088"
	I1029 09:33:36.542453  187270 cri.go:89] found id: "09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	I1029 09:33:36.542456  187270 cri.go:89] found id: "7cc19ed872138312e19fcf79fd56294e7666859c5fa415c4222ecb63f7ac594a"
	I1029 09:33:36.542460  187270 cri.go:89] found id: ""
	I1029 09:33:36.542514  187270 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:33:36.553797  187270 retry.go:31] will retry after 234.800696ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:33:36Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:33:36.789289  187270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:33:36.808908  187270 pause.go:52] kubelet running: false
	I1029 09:33:36.808969  187270 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:33:36.997761  187270 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:33:36.997838  187270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:33:37.069373  187270 cri.go:89] found id: "402b270dff1ce36c60626612f013ab04776b9d0049122dd1fc5aa0d5c98c2b9b"
	I1029 09:33:37.069394  187270 cri.go:89] found id: "47dbb1a9d8df6448d47893f3a3717f32a5db0b3f6ef22f1cd8df505a4683dc91"
	I1029 09:33:37.069400  187270 cri.go:89] found id: "6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af"
	I1029 09:33:37.069404  187270 cri.go:89] found id: "4b73cff8c02ceab1c3bbb9d9b208c88f66a612ab8519eaf85d5e65cd9bf0e4b8"
	I1029 09:33:37.069408  187270 cri.go:89] found id: "2caaaff66733a607dc3dcf0a9fda574cba6e68a7ed1972b5ba272c9ebca233b9"
	I1029 09:33:37.069412  187270 cri.go:89] found id: "b78acb0b4196df036f132bb8dbe1317e4d47239b19065d5c77f8dbaf30d95978"
	I1029 09:33:37.069415  187270 cri.go:89] found id: "a5366971dd2d52fc09c0ee8faad87d9d554996df31f7e1674b9d9b415dce9d79"
	I1029 09:33:37.069464  187270 cri.go:89] found id: "85f50a83501bd8c007c1d4b5360ff663d8311adaae8d6d89173f2b09d0a448dc"
	I1029 09:33:37.069476  187270 cri.go:89] found id: "15deeb92de4799a5896e0b1d2bb95ad8660db0e8da65e42390544d6bec6b7088"
	I1029 09:33:37.069483  187270 cri.go:89] found id: "09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	I1029 09:33:37.069486  187270 cri.go:89] found id: "7cc19ed872138312e19fcf79fd56294e7666859c5fa415c4222ecb63f7ac594a"
	I1029 09:33:37.069489  187270 cri.go:89] found id: ""
	I1029 09:33:37.069552  187270 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:33:37.080442  187270 retry.go:31] will retry after 376.159509ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:33:37Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:33:37.456887  187270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:33:37.470073  187270 pause.go:52] kubelet running: false
	I1029 09:33:37.470139  187270 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:33:37.642577  187270 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:33:37.642669  187270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:33:37.711855  187270 cri.go:89] found id: "402b270dff1ce36c60626612f013ab04776b9d0049122dd1fc5aa0d5c98c2b9b"
	I1029 09:33:37.711879  187270 cri.go:89] found id: "47dbb1a9d8df6448d47893f3a3717f32a5db0b3f6ef22f1cd8df505a4683dc91"
	I1029 09:33:37.711885  187270 cri.go:89] found id: "6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af"
	I1029 09:33:37.711889  187270 cri.go:89] found id: "4b73cff8c02ceab1c3bbb9d9b208c88f66a612ab8519eaf85d5e65cd9bf0e4b8"
	I1029 09:33:37.711893  187270 cri.go:89] found id: "2caaaff66733a607dc3dcf0a9fda574cba6e68a7ed1972b5ba272c9ebca233b9"
	I1029 09:33:37.711896  187270 cri.go:89] found id: "b78acb0b4196df036f132bb8dbe1317e4d47239b19065d5c77f8dbaf30d95978"
	I1029 09:33:37.711900  187270 cri.go:89] found id: "a5366971dd2d52fc09c0ee8faad87d9d554996df31f7e1674b9d9b415dce9d79"
	I1029 09:33:37.711903  187270 cri.go:89] found id: "85f50a83501bd8c007c1d4b5360ff663d8311adaae8d6d89173f2b09d0a448dc"
	I1029 09:33:37.711906  187270 cri.go:89] found id: "15deeb92de4799a5896e0b1d2bb95ad8660db0e8da65e42390544d6bec6b7088"
	I1029 09:33:37.711913  187270 cri.go:89] found id: "09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	I1029 09:33:37.711921  187270 cri.go:89] found id: "7cc19ed872138312e19fcf79fd56294e7666859c5fa415c4222ecb63f7ac594a"
	I1029 09:33:37.711924  187270 cri.go:89] found id: ""
	I1029 09:33:37.711971  187270 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:33:37.727391  187270 out.go:203] 
	W1029 09:33:37.730339  187270 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:33:37.730366  187270 out.go:285] * 
	W1029 09:33:37.737047  187270 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:33:37.740104  187270 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-162751 --alsologtostderr -v=1 failed: exit status 80
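The proximate cause of the exit status 80 is visible in the stderr above: each pause attempt shells into the node and runs `sudo runc list -f json`, every attempt fails with "open /run/runc: no such file or directory" because the runc state directory does not exist on the node, and minikube retries a few times with short backoffs (retry.go:31) before giving up with GUEST_PAUSE. Below is a minimal Go sketch of that retry-and-give-up pattern as it appears in the log; the helper names (runWithRetry, listRunningContainers) are hypothetical and are not minikube's actual API.

// Minimal sketch of the retry pattern seen above ("will retry after ...ms").
// Illustrative only; names here are hypothetical, not minikube's real code.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runWithRetry runs cmd up to attempts times, sleeping a short randomized
// backoff between failures, and returns the last error if all attempts fail.
func runWithRetry(attempts int, cmd func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = cmd(); err == nil {
			return nil
		}
		backoff := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return err
}

func main() {
	// The command that keeps failing in the log: runc cannot open its state
	// directory /run/runc on the node, so listing containers never succeeds.
	listRunningContainers := func() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list -f json: %w: %s", err, out)
		}
		return nil
	}
	if err := runWithRetry(4, listRunningContainers); err != nil {
		fmt.Println("giving up:", err) // minikube surfaces this as GUEST_PAUSE
	}
}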
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-162751
helpers_test.go:243: (dbg) docker inspect old-k8s-version-162751:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2",
	        "Created": "2025-10-29T09:31:17.309145207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:32:34.183234275Z",
	            "FinishedAt": "2025-10-29T09:32:33.368426646Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/hosts",
	        "LogPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2-json.log",
	        "Name": "/old-k8s-version-162751",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-162751:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-162751",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2",
	                "LowerDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-162751",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-162751/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-162751",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-162751",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-162751",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94b206aea933a7a380a8c1275c31a4039b67d22639ed7e5e86bbd757be0b118e",
	            "SandboxKey": "/var/run/docker/netns/94b206aea933",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-162751": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:bb:58:3a:d5:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b39d4ca145f787b1920f94a4f3933ceac95f90f60a1cf8cbdf99d14ff53419fa",
	                    "EndpointID": "e595ce90cf8c664afa924ce4b7be34561fbedaa2c755ba1aa3f56c862bdd6a05",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-162751",
	                        "ff565e88a53d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
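The inspect output confirms the node container is still Running and lists its published host ports; the 22/tcp mapping (HostPort 33048) is the same value the pause command resolved earlier via `docker container inspect -f ...` before opening its SSH session. A small Go sketch of that lookup is below, assuming a hypothetical helper name (hostSSHPort); the inspect template string itself is the one used at cli_runner.go above.

// Sketch: resolving the published SSH port of a minikube node container with
// the same Go template seen in the cli_runner.go log line earlier.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-162751")
	if err != nil {
		fmt.Println(err)
		return
	}
	// For the inspect output above this prints 33048, matching the sshutil
	// "new ssh client" line earlier in the pause log.
	fmt.Println("127.0.0.1:" + port)
}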
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751: exit status 2 (343.334112ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162751 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-162751 logs -n 25: (1.291338016s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-937200 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo containerd config dump                                                                                                                                                                                                  │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo crio config                                                                                                                                                                                                             │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ delete  │ -p cilium-937200                                                                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:29 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:30 UTC │
	│ delete  │ -p force-systemd-env-116185                                                                                                                                                                                                                   │ force-systemd-env-116185 │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:30 UTC │
	│ start   │ -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ cert-options-699236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	│ stop    │ -p old-k8s-version-162751 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:33:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:33:33.819085  187060 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:33:33.819202  187060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:33.819207  187060 out.go:374] Setting ErrFile to fd 2...
	I1029 09:33:33.819211  187060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:33.819472  187060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:33:33.819848  187060 out.go:368] Setting JSON to false
	I1029 09:33:33.821106  187060 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4566,"bootTime":1761725848,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:33:33.821168  187060 start.go:143] virtualization:  
	I1029 09:33:33.824833  187060 out.go:179] * [cert-expiration-690444] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:33:33.828583  187060 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:33:33.828721  187060 notify.go:221] Checking for updates...
	I1029 09:33:33.834811  187060 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:33:33.837749  187060 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:33:33.840582  187060 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:33:33.843499  187060 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:33:33.846408  187060 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:33:33.849932  187060 config.go:182] Loaded profile config "cert-expiration-690444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:33:33.850610  187060 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:33:33.877203  187060 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:33:33.877301  187060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:33.943824  187060 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:33:33.933616587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:33.943912  187060 docker.go:319] overlay module found
	I1029 09:33:33.946988  187060 out.go:179] * Using the docker driver based on existing profile
	I1029 09:33:33.949766  187060 start.go:309] selected driver: docker
	I1029 09:33:33.949776  187060 start.go:930] validating driver "docker" against &{Name:cert-expiration-690444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-690444 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:33:33.949872  187060 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:33:33.950618  187060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:34.020705  187060 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:33:34.010115636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:34.021010  187060 cni.go:84] Creating CNI manager for ""
	I1029 09:33:34.021069  187060 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:33:34.021111  187060 start.go:353] cluster config:
	{Name:cert-expiration-690444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-690444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1029 09:33:34.024256  187060 out.go:179] * Starting "cert-expiration-690444" primary control-plane node in "cert-expiration-690444" cluster
	I1029 09:33:34.027132  187060 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:33:34.030080  187060 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:33:34.032884  187060 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:33:34.032932  187060 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:33:34.032939  187060 cache.go:59] Caching tarball of preloaded images
	I1029 09:33:34.032939  187060 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:33:34.033017  187060 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:33:34.033027  187060 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:33:34.033133  187060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/cert-expiration-690444/config.json ...
	I1029 09:33:34.053013  187060 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:33:34.053023  187060 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:33:34.053042  187060 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:33:34.053063  187060 start.go:360] acquireMachinesLock for cert-expiration-690444: {Name:mk45a13a7ff76a9410b822199a57af2cad65c665 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:33:34.053122  187060 start.go:364] duration metric: took 43.553µs to acquireMachinesLock for "cert-expiration-690444"
	I1029 09:33:34.053139  187060 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:33:34.053144  187060 fix.go:54] fixHost starting: 
	I1029 09:33:34.053401  187060 cli_runner.go:164] Run: docker container inspect cert-expiration-690444 --format={{.State.Status}}
	I1029 09:33:34.071065  187060 fix.go:112] recreateIfNeeded on cert-expiration-690444: state=Running err=<nil>
	W1029 09:33:34.071085  187060 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.139326049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.147079689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.149720893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.166619668Z" level=info msg="Created container 09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82/dashboard-metrics-scraper" id=894433a5-4856-484f-b896-1c3e49d2bc1e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.172304188Z" level=info msg="Starting container: 09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c" id=b57d7587-f1b6-44a3-995e-aa88df073659 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.179726314Z" level=info msg="Started container" PID=1627 containerID=09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82/dashboard-metrics-scraper id=b57d7587-f1b6-44a3-995e-aa88df073659 name=/runtime.v1.RuntimeService/StartContainer sandboxID=85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d
	Oct 29 09:33:21 old-k8s-version-162751 conmon[1625]: conmon 09a02eb7dd887de0741e <ninfo>: container 1627 exited with status 1
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.322068094Z" level=info msg="Removing container: 4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182" id=f998ef5c-ad02-4024-bc90-58604a5184a9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.336687811Z" level=info msg="Error loading conmon cgroup of container 4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182: cgroup deleted" id=f998ef5c-ad02-4024-bc90-58604a5184a9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.34384966Z" level=info msg="Removed container 4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82/dashboard-metrics-scraper" id=f998ef5c-ad02-4024-bc90-58604a5184a9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.864041534Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.867970987Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.868002618Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.868026577Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.871126911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.871159912Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.871181533Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.874482139Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.874515157Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.874536605Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.877595133Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.877628734Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.877651167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.881247784Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.881283156Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	09a02eb7dd887       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   85f3b24f28dd1       dashboard-metrics-scraper-5f989dc9cf-f4h82       kubernetes-dashboard
	402b270dff1ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   19ef0b6dc7ed0       storage-provisioner                              kube-system
	7cc19ed872138       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   29 seconds ago      Running             kubernetes-dashboard        0                   aa06a2a05cc3f       kubernetes-dashboard-8694d4445c-dvv98            kubernetes-dashboard
	3535f67a8b8f5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   1a206b5fc832c       busybox                                          default
	47dbb1a9d8df6       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   5774f8817df38       coredns-5dd5756b68-dq48g                         kube-system
	6d708d6e42dc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   19ef0b6dc7ed0       storage-provisioner                              kube-system
	4b73cff8c02ce       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   61d1ed7b41d68       kindnet-2dggr                                    kube-system
	2caaaff66733a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   73700817e8792       kube-proxy-zvr7g                                 kube-system
	b78acb0b4196d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   b5185cf796496       etcd-old-k8s-version-162751                      kube-system
	a5366971dd2d5       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   3e69437ec77b2       kube-controller-manager-old-k8s-version-162751   kube-system
	85f50a83501bd       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   cc5a80da84aba       kube-scheduler-old-k8s-version-162751            kube-system
	15deeb92de479       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   44e0c88c1a410       kube-apiserver-old-k8s-version-162751            kube-system
	
	
	==> coredns [47dbb1a9d8df6448d47893f3a3717f32a5db0b3f6ef22f1cd8df505a4683dc91] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35346 - 4927 "HINFO IN 2646169345322118318.2806814514088548949. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005398166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-162751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-162751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=old-k8s-version-162751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_31_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:31:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-162751
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:33:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-162751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fe615db9-32dc-431b-8163-4556fb5b38ef
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-dq48g                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-162751                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-2dggr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-162751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-162751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-zvr7g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-162751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f4h82        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dvv98             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-162751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-162751 event: Registered Node old-k8s-version-162751 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-162751 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-162751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-162751 event: Registered Node old-k8s-version-162751 in Controller
	
	
	==> dmesg <==
	[Oct29 09:04] overlayfs: idmapped layers are currently not supported
	[Oct29 09:05] overlayfs: idmapped layers are currently not supported
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b78acb0b4196df036f132bb8dbe1317e4d47239b19065d5c77f8dbaf30d95978] <==
	{"level":"info","ts":"2025-10-29T09:32:42.414721Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:32:42.414742Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:32:42.415049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-29T09:32:42.415131Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-29T09:32:42.415239Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:32:42.415265Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:32:42.449448Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:32:42.449605Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:32:42.449616Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:32:42.45657Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:32:42.45652Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:32:43.715948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-29T09:32:43.716264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:32:43.716513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-29T09:32:43.716708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.716746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.716793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.716829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.727724Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-162751 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:32:43.727973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:32:43.732503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:32:43.733575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-29T09:32:43.736595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-29T09:32:43.73711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:32:43.742294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:33:39 up  1:16,  0 user,  load average: 1.85, 3.13, 2.51
	Linux old-k8s-version-162751 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4b73cff8c02ceab1c3bbb9d9b208c88f66a612ab8519eaf85d5e65cd9bf0e4b8] <==
	I1029 09:32:48.661945       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:32:48.662199       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:32:48.662324       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:32:48.662336       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:32:48.662348       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:32:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:32:48.858435       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:32:48.858451       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:32:48.858459       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:32:48.859118       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:33:18.858614       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:33:18.858747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:33:18.858853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:33:18.860712       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 09:33:20.159545       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:33:20.159620       1 metrics.go:72] Registering metrics
	I1029 09:33:20.159702       1 controller.go:711] "Syncing nftables rules"
	I1029 09:33:28.863714       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:33:28.863749       1 main.go:301] handling current node
	I1029 09:33:38.864532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:33:38.864567       1 main.go:301] handling current node
	
	
	==> kube-apiserver [15deeb92de4799a5896e0b1d2bb95ad8660db0e8da65e42390544d6bec6b7088] <==
	I1029 09:32:47.127875       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:32:47.128190       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1029 09:32:47.169014       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1029 09:32:47.169357       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:32:47.185869       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1029 09:32:47.186142       1 shared_informer.go:318] Caches are synced for configmaps
	I1029 09:32:47.191097       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1029 09:32:47.191248       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1029 09:32:47.195015       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1029 09:32:47.197423       1 aggregator.go:166] initial CRD sync complete...
	I1029 09:32:47.197639       1 autoregister_controller.go:141] Starting autoregister controller
	I1029 09:32:47.197679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:32:47.197726       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:32:47.259144       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:32:47.777639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:32:49.005726       1 controller.go:624] quota admission added evaluator for: namespaces
	I1029 09:32:49.065941       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1029 09:32:49.101285       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:32:49.132577       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:32:49.145194       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1029 09:32:49.264775       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.81.224"}
	I1029 09:32:49.298106       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.109.241"}
	I1029 09:32:59.442547       1 controller.go:624] quota admission added evaluator for: endpoints
	I1029 09:32:59.464506       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:32:59.505743       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a5366971dd2d52fc09c0ee8faad87d9d554996df31f7e1674b9d9b415dce9d79] <==
	I1029 09:32:59.589491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.165797ms"
	I1029 09:32:59.589665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.48µs"
	I1029 09:32:59.597224       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f4h82"
	I1029 09:32:59.598546       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1029 09:32:59.610510       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-dvv98"
	I1029 09:32:59.625557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.195865ms"
	I1029 09:32:59.653038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.452667ms"
	I1029 09:32:59.676426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.803491ms"
	I1029 09:32:59.676926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.208µs"
	I1029 09:32:59.692759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.23µs"
	I1029 09:32:59.719142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.942408ms"
	I1029 09:32:59.720161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.125µs"
	I1029 09:32:59.737970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.706µs"
	I1029 09:32:59.944725       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:32:59.944753       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1029 09:32:59.973265       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:33:05.284244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.279µs"
	I1029 09:33:06.298961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.739µs"
	I1029 09:33:07.305719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.13µs"
	I1029 09:33:09.311166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.717299ms"
	I1029 09:33:09.311966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.745µs"
	I1029 09:33:21.342121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.352µs"
	I1029 09:33:21.867917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.26266ms"
	I1029 09:33:21.868182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.926µs"
	I1029 09:33:29.952859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.664µs"
	
	
	==> kube-proxy [2caaaff66733a607dc3dcf0a9fda574cba6e68a7ed1972b5ba272c9ebca233b9] <==
	I1029 09:32:48.729441       1 server_others.go:69] "Using iptables proxy"
	I1029 09:32:48.752384       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1029 09:32:48.809284       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:32:48.814981       1 server_others.go:152] "Using iptables Proxier"
	I1029 09:32:48.815097       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1029 09:32:48.815135       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1029 09:32:48.815217       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1029 09:32:48.815597       1 server.go:846] "Version info" version="v1.28.0"
	I1029 09:32:48.815998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:32:48.817811       1 config.go:188] "Starting service config controller"
	I1029 09:32:48.817982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1029 09:32:48.818036       1 config.go:97] "Starting endpoint slice config controller"
	I1029 09:32:48.818063       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1029 09:32:48.820401       1 config.go:315] "Starting node config controller"
	I1029 09:32:48.820497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1029 09:32:48.918801       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1029 09:32:48.918853       1 shared_informer.go:318] Caches are synced for service config
	I1029 09:32:48.920640       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [85f50a83501bd8c007c1d4b5360ff663d8311adaae8d6d89173f2b09d0a448dc] <==
	I1029 09:32:45.056199       1 serving.go:348] Generated self-signed cert in-memory
	W1029 09:32:46.964672       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:32:46.964764       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:32:46.964797       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:32:46.964830       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:32:47.058140       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1029 09:32:47.058251       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:32:47.060143       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1029 09:32:47.068648       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:32:47.068756       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:32:47.068800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1029 09:32:47.111753       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1029 09:32:47.111826       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:32:47.140050       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1029 09:32:47.140162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1029 09:32:47.140504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1029 09:32:47.140706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1029 09:32:47.140624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1029 09:32:47.140834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1029 09:32:47.140675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1029 09:32:47.140922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1029 09:32:48.369404       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: I1029 09:32:59.736269     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khh9j\" (UniqueName: \"kubernetes.io/projected/18b22fc8-08c6-4108-ad39-49635e52ab91-kube-api-access-khh9j\") pod \"dashboard-metrics-scraper-5f989dc9cf-f4h82\" (UID: \"18b22fc8-08c6-4108-ad39-49635e52ab91\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82"
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: I1029 09:32:59.736515     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpqp\" (UniqueName: \"kubernetes.io/projected/7c0cb30a-8153-4136-80e4-1c87bbec948c-kube-api-access-vwpqp\") pod \"kubernetes-dashboard-8694d4445c-dvv98\" (UID: \"7c0cb30a-8153-4136-80e4-1c87bbec948c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dvv98"
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: I1029 09:32:59.736660     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7c0cb30a-8153-4136-80e4-1c87bbec948c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dvv98\" (UID: \"7c0cb30a-8153-4136-80e4-1c87bbec948c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dvv98"
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: W1029 09:32:59.958813     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d WatchSource:0}: Error finding container 85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d: Status 404 returned error can't find the container with id 85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: W1029 09:32:59.992722     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-aa06a2a05cc3f42e94196b9e06881c1124223752566e801e7b7f4eb52e4a2ee2 WatchSource:0}: Error finding container aa06a2a05cc3f42e94196b9e06881c1124223752566e801e7b7f4eb52e4a2ee2: Status 404 returned error can't find the container with id aa06a2a05cc3f42e94196b9e06881c1124223752566e801e7b7f4eb52e4a2ee2
	Oct 29 09:33:05 old-k8s-version-162751 kubelet[774]: I1029 09:33:05.266167     774 scope.go:117] "RemoveContainer" containerID="fe08021aff09fbdea0f0a3cbae40c98dea9ab6e390cd38cdffa96652fdf38082"
	Oct 29 09:33:06 old-k8s-version-162751 kubelet[774]: I1029 09:33:06.277031     774 scope.go:117] "RemoveContainer" containerID="fe08021aff09fbdea0f0a3cbae40c98dea9ab6e390cd38cdffa96652fdf38082"
	Oct 29 09:33:06 old-k8s-version-162751 kubelet[774]: I1029 09:33:06.277353     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:06 old-k8s-version-162751 kubelet[774]: E1029 09:33:06.277728     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:07 old-k8s-version-162751 kubelet[774]: I1029 09:33:07.282464     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:07 old-k8s-version-162751 kubelet[774]: E1029 09:33:07.282747     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:09 old-k8s-version-162751 kubelet[774]: I1029 09:33:09.938442     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:09 old-k8s-version-162751 kubelet[774]: E1029 09:33:09.938759     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:19 old-k8s-version-162751 kubelet[774]: I1029 09:33:19.309964     774 scope.go:117] "RemoveContainer" containerID="6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af"
	Oct 29 09:33:19 old-k8s-version-162751 kubelet[774]: I1029 09:33:19.338830     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dvv98" podStartSLOduration=11.123512307 podCreationTimestamp="2025-10-29 09:32:59 +0000 UTC" firstStartedPulling="2025-10-29 09:32:59.995376607 +0000 UTC m=+19.110538221" lastFinishedPulling="2025-10-29 09:33:09.209540224 +0000 UTC m=+28.324701838" observedRunningTime="2025-10-29 09:33:09.302133645 +0000 UTC m=+28.417295267" watchObservedRunningTime="2025-10-29 09:33:19.337675924 +0000 UTC m=+38.452837538"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: I1029 09:33:21.136112     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: I1029 09:33:21.319354     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: I1029 09:33:21.319932     774 scope.go:117] "RemoveContainer" containerID="09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: E1029 09:33:21.320418     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:29 old-k8s-version-162751 kubelet[774]: I1029 09:33:29.938066     774 scope.go:117] "RemoveContainer" containerID="09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	Oct 29 09:33:29 old-k8s-version-162751 kubelet[774]: E1029 09:33:29.938827     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:35 old-k8s-version-162751 kubelet[774]: I1029 09:33:35.805464     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 29 09:33:35 old-k8s-version-162751 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:33:35 old-k8s-version-162751 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:33:35 old-k8s-version-162751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7cc19ed872138312e19fcf79fd56294e7666859c5fa415c4222ecb63f7ac594a] <==
	2025/10/29 09:33:09 Using namespace: kubernetes-dashboard
	2025/10/29 09:33:09 Using in-cluster config to connect to apiserver
	2025/10/29 09:33:09 Using secret token for csrf signing
	2025/10/29 09:33:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:33:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:33:09 Successful initial request to the apiserver, version: v1.28.0
	2025/10/29 09:33:09 Generating JWE encryption key
	2025/10/29 09:33:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:33:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:33:09 Initializing JWE encryption key from synchronized object
	2025/10/29 09:33:09 Creating in-cluster Sidecar client
	2025/10/29 09:33:09 Serving insecurely on HTTP port: 9090
	2025/10/29 09:33:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:33:09 Starting overwatch
	
	
	==> storage-provisioner [402b270dff1ce36c60626612f013ab04776b9d0049122dd1fc5aa0d5c98c2b9b] <==
	I1029 09:33:19.352787       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:33:19.367517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:33:19.367568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1029 09:33:36.765087       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:33:36.765352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162751_b0b6b5e0-ceca-4958-b59c-5e1402bd5167!
	I1029 09:33:36.766129       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a84a4b-3609-4d3c-a5d7-cfe05ff94030", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-162751_b0b6b5e0-ceca-4958-b59c-5e1402bd5167 became leader
	I1029 09:33:36.869125       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162751_b0b6b5e0-ceca-4958-b59c-5e1402bd5167!
	
	
	==> storage-provisioner [6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af] <==
	I1029 09:32:48.604682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:33:18.608675       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162751 -n old-k8s-version-162751
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162751 -n old-k8s-version-162751: exit status 2 (346.153346ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-162751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-162751
helpers_test.go:243: (dbg) docker inspect old-k8s-version-162751:

-- stdout --
	[
	    {
	        "Id": "ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2",
	        "Created": "2025-10-29T09:31:17.309145207Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:32:34.183234275Z",
	            "FinishedAt": "2025-10-29T09:32:33.368426646Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/hosts",
	        "LogPath": "/var/lib/docker/containers/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2-json.log",
	        "Name": "/old-k8s-version-162751",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-162751:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-162751",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2",
	                "LowerDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04ad89da0567c27cf19c3a878c1a373075d3240512b0417dad3b82758bcec18e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-162751",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-162751/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-162751",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-162751",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-162751",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94b206aea933a7a380a8c1275c31a4039b67d22639ed7e5e86bbd757be0b118e",
	            "SandboxKey": "/var/run/docker/netns/94b206aea933",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-162751": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:bb:58:3a:d5:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b39d4ca145f787b1920f94a4f3933ceac95f90f60a1cf8cbdf99d14ff53419fa",
	                    "EndpointID": "e595ce90cf8c664afa924ce4b7be34561fbedaa2c755ba1aa3f56c862bdd6a05",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-162751",
	                        "ff565e88a53d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
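The JSON block above is ordinary "docker container inspect" output for the kic node container; later in this log minikube reads the mapped SSH port from the same structure with a Go template ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}). A minimal Go sketch, not minikube's actual code and assuming only the container name from the dump, that recovers that value from the raw JSON:

	// Hedged sketch: shell out to `docker container inspect` and decode just the
	// fields needed to read NetworkSettings.Ports["22/tcp"][0].HostPort.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "old-k8s-version-162751").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("container not found")
		}
		bindings := containers[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			log.Fatal("no 22/tcp host binding found")
		}
		fmt.Println(bindings[0].HostIp, bindings[0].HostPort) // 127.0.0.1 33048 in the dump above
	}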
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751: exit status 2 (359.585493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
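As the line above notes, "minikube status" reports a degraded component through its exit code, and the post-mortem helper deliberately tolerates exit status 2. A rough Go sketch of that pattern (a hypothetical helper, not the test's real implementation), using os/exec and errors.As:

	// Hedged sketch: run `minikube status --format={{.Host}}` and treat exit
	// status 2 as "state collected, but something is not Running" (may be ok).
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-162751", "-n", "old-k8s-version-162751")
		out, err := cmd.Output() // stdout ("Running" above) is still captured on a non-zero exit
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			// exit 0: all components healthy
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 2:
			log.Printf("status exited 2 (may be ok)") // same tolerance as the helper above
		default:
			log.Fatalf("minikube status failed: %v", err)
		}
		fmt.Printf("host state: %s", out)
	}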
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162751 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-162751 logs -n 25: (1.459788308s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-937200 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo containerd config dump                                                                                                                                                                                                  │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo crio config                                                                                                                                                                                                             │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ delete  │ -p cilium-937200                                                                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:29 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:30 UTC │
	│ delete  │ -p force-systemd-env-116185                                                                                                                                                                                                                   │ force-systemd-env-116185 │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:30 UTC │
	│ start   │ -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ cert-options-699236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	│ stop    │ -p old-k8s-version-162751 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:33:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:33:33.819085  187060 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:33:33.819202  187060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:33.819207  187060 out.go:374] Setting ErrFile to fd 2...
	I1029 09:33:33.819211  187060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:33.819472  187060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:33:33.819848  187060 out.go:368] Setting JSON to false
	I1029 09:33:33.821106  187060 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4566,"bootTime":1761725848,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:33:33.821168  187060 start.go:143] virtualization:  
	I1029 09:33:33.824833  187060 out.go:179] * [cert-expiration-690444] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:33:33.828583  187060 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:33:33.828721  187060 notify.go:221] Checking for updates...
	I1029 09:33:33.834811  187060 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:33:33.837749  187060 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:33:33.840582  187060 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:33:33.843499  187060 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:33:33.846408  187060 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:33:33.849932  187060 config.go:182] Loaded profile config "cert-expiration-690444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:33:33.850610  187060 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:33:33.877203  187060 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:33:33.877301  187060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:33.943824  187060 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:33:33.933616587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:33.943912  187060 docker.go:319] overlay module found
	I1029 09:33:33.946988  187060 out.go:179] * Using the docker driver based on existing profile
	I1029 09:33:33.949766  187060 start.go:309] selected driver: docker
	I1029 09:33:33.949776  187060 start.go:930] validating driver "docker" against &{Name:cert-expiration-690444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-690444 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:33:33.949872  187060 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:33:33.950618  187060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:34.020705  187060 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:33:34.010115636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:34.021010  187060 cni.go:84] Creating CNI manager for ""
	I1029 09:33:34.021069  187060 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:33:34.021111  187060 start.go:353] cluster config:
	{Name:cert-expiration-690444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-690444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1029 09:33:34.024256  187060 out.go:179] * Starting "cert-expiration-690444" primary control-plane node in "cert-expiration-690444" cluster
	I1029 09:33:34.027132  187060 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:33:34.030080  187060 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:33:34.032884  187060 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:33:34.032932  187060 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:33:34.032939  187060 cache.go:59] Caching tarball of preloaded images
	I1029 09:33:34.032939  187060 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:33:34.033017  187060 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:33:34.033027  187060 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:33:34.033133  187060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/cert-expiration-690444/config.json ...
	I1029 09:33:34.053013  187060 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:33:34.053023  187060 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:33:34.053042  187060 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:33:34.053063  187060 start.go:360] acquireMachinesLock for cert-expiration-690444: {Name:mk45a13a7ff76a9410b822199a57af2cad65c665 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:33:34.053122  187060 start.go:364] duration metric: took 43.553µs to acquireMachinesLock for "cert-expiration-690444"
	I1029 09:33:34.053139  187060 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:33:34.053144  187060 fix.go:54] fixHost starting: 
	I1029 09:33:34.053401  187060 cli_runner.go:164] Run: docker container inspect cert-expiration-690444 --format={{.State.Status}}
	I1029 09:33:34.071065  187060 fix.go:112] recreateIfNeeded on cert-expiration-690444: state=Running err=<nil>
	W1029 09:33:34.071085  187060 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:33:34.076294  187060 out.go:252] * Updating the running docker "cert-expiration-690444" container ...
	I1029 09:33:34.076357  187060 machine.go:94] provisionDockerMachine start ...
	I1029 09:33:34.076437  187060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-690444
	I1029 09:33:34.103057  187060 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:34.103376  187060 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1029 09:33:34.103382  187060 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:33:34.260682  187060 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-690444
	
	I1029 09:33:34.260695  187060 ubuntu.go:182] provisioning hostname "cert-expiration-690444"
	I1029 09:33:34.260756  187060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-690444
	I1029 09:33:34.279316  187060 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:34.279609  187060 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1029 09:33:34.279626  187060 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-690444 && echo "cert-expiration-690444" | sudo tee /etc/hostname
	I1029 09:33:34.441760  187060 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-690444
	
	I1029 09:33:34.441834  187060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-690444
	I1029 09:33:34.459764  187060 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:34.460071  187060 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1029 09:33:34.460086  187060 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-690444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-690444/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-690444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:33:34.612933  187060 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:33:34.612948  187060 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:33:34.612976  187060 ubuntu.go:190] setting up certificates
	I1029 09:33:34.613006  187060 provision.go:84] configureAuth start
	I1029 09:33:34.613087  187060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-690444
	I1029 09:33:34.632517  187060 provision.go:143] copyHostCerts
	I1029 09:33:34.632595  187060 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:33:34.632608  187060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:33:34.632696  187060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:33:34.632814  187060 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:33:34.632827  187060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:33:34.632861  187060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:33:34.632974  187060 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:33:34.632978  187060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:33:34.633004  187060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:33:34.633059  187060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-690444 san=[127.0.0.1 192.168.85.2 cert-expiration-690444 localhost minikube]
	I1029 09:33:35.252732  187060 provision.go:177] copyRemoteCerts
	I1029 09:33:35.252790  187060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:33:35.252831  187060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-690444
	I1029 09:33:35.288502  187060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/cert-expiration-690444/id_rsa Username:docker}
	I1029 09:33:35.397455  187060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:33:35.428755  187060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:33:35.448639  187060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1029 09:33:35.466666  187060 provision.go:87] duration metric: took 853.638709ms to configureAuth
	I1029 09:33:35.466682  187060 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:33:35.466861  187060 config.go:182] Loaded profile config "cert-expiration-690444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:33:35.466964  187060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-690444
	I1029 09:33:35.486867  187060 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:35.487221  187060 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1029 09:33:35.487233  187060 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.139326049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.147079689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.149720893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.166619668Z" level=info msg="Created container 09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82/dashboard-metrics-scraper" id=894433a5-4856-484f-b896-1c3e49d2bc1e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.172304188Z" level=info msg="Starting container: 09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c" id=b57d7587-f1b6-44a3-995e-aa88df073659 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.179726314Z" level=info msg="Started container" PID=1627 containerID=09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82/dashboard-metrics-scraper id=b57d7587-f1b6-44a3-995e-aa88df073659 name=/runtime.v1.RuntimeService/StartContainer sandboxID=85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d
	Oct 29 09:33:21 old-k8s-version-162751 conmon[1625]: conmon 09a02eb7dd887de0741e <ninfo>: container 1627 exited with status 1
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.322068094Z" level=info msg="Removing container: 4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182" id=f998ef5c-ad02-4024-bc90-58604a5184a9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.336687811Z" level=info msg="Error loading conmon cgroup of container 4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182: cgroup deleted" id=f998ef5c-ad02-4024-bc90-58604a5184a9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:33:21 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:21.34384966Z" level=info msg="Removed container 4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82/dashboard-metrics-scraper" id=f998ef5c-ad02-4024-bc90-58604a5184a9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.864041534Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.867970987Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.868002618Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.868026577Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.871126911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.871159912Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.871181533Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.874482139Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.874515157Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.874536605Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.877595133Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.877628734Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.877651167Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.881247784Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:33:28 old-k8s-version-162751 crio[649]: time="2025-10-29T09:33:28.881283156Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	09a02eb7dd887       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   85f3b24f28dd1       dashboard-metrics-scraper-5f989dc9cf-f4h82       kubernetes-dashboard
	402b270dff1ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   19ef0b6dc7ed0       storage-provisioner                              kube-system
	7cc19ed872138       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   aa06a2a05cc3f       kubernetes-dashboard-8694d4445c-dvv98            kubernetes-dashboard
	3535f67a8b8f5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   1a206b5fc832c       busybox                                          default
	47dbb1a9d8df6       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   5774f8817df38       coredns-5dd5756b68-dq48g                         kube-system
	6d708d6e42dc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   19ef0b6dc7ed0       storage-provisioner                              kube-system
	4b73cff8c02ce       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   61d1ed7b41d68       kindnet-2dggr                                    kube-system
	2caaaff66733a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   73700817e8792       kube-proxy-zvr7g                                 kube-system
	b78acb0b4196d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   b5185cf796496       etcd-old-k8s-version-162751                      kube-system
	a5366971dd2d5       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   3e69437ec77b2       kube-controller-manager-old-k8s-version-162751   kube-system
	85f50a83501bd       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   cc5a80da84aba       kube-scheduler-old-k8s-version-162751            kube-system
	15deeb92de479       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   44e0c88c1a410       kube-apiserver-old-k8s-version-162751            kube-system
	
	
	==> coredns [47dbb1a9d8df6448d47893f3a3717f32a5db0b3f6ef22f1cd8df505a4683dc91] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35346 - 4927 "HINFO IN 2646169345322118318.2806814514088548949. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005398166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-162751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-162751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=old-k8s-version-162751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_31_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:31:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-162751
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:33:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:31:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:33:17 +0000   Wed, 29 Oct 2025 09:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-162751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fe615db9-32dc-431b-8163-4556fb5b38ef
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-dq48g                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-162751                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-2dggr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-162751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-162751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-zvr7g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-162751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f4h82        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dvv98             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node old-k8s-version-162751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-162751 event: Registered Node old-k8s-version-162751 in Controller
	  Normal  NodeReady                93s                kubelet          Node old-k8s-version-162751 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-162751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-162751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-162751 event: Registered Node old-k8s-version-162751 in Controller
	
	
	==> dmesg <==
	[Oct29 09:04] overlayfs: idmapped layers are currently not supported
	[Oct29 09:05] overlayfs: idmapped layers are currently not supported
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b78acb0b4196df036f132bb8dbe1317e4d47239b19065d5c77f8dbaf30d95978] <==
	{"level":"info","ts":"2025-10-29T09:32:42.414721Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:32:42.414742Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:32:42.415049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-29T09:32:42.415131Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-29T09:32:42.415239Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:32:42.415265Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:32:42.449448Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:32:42.449605Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:32:42.449616Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-29T09:32:42.45657Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:32:42.45652Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:32:43.715948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-29T09:32:43.716264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:32:43.716513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-29T09:32:43.716708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.716746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.716793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.716829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-29T09:32:43.727724Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-162751 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:32:43.727973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:32:43.732503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:32:43.733575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-29T09:32:43.736595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-29T09:32:43.73711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:32:43.742294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:33:41 up  1:16,  0 user,  load average: 1.85, 3.13, 2.51
	Linux old-k8s-version-162751 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4b73cff8c02ceab1c3bbb9d9b208c88f66a612ab8519eaf85d5e65cd9bf0e4b8] <==
	I1029 09:32:48.661945       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:32:48.662199       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:32:48.662324       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:32:48.662336       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:32:48.662348       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:32:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:32:48.858435       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:32:48.858451       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:32:48.858459       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:32:48.859118       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:33:18.858614       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:33:18.858747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:33:18.858853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:33:18.860712       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 09:33:20.159545       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:33:20.159620       1 metrics.go:72] Registering metrics
	I1029 09:33:20.159702       1 controller.go:711] "Syncing nftables rules"
	I1029 09:33:28.863714       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:33:28.863749       1 main.go:301] handling current node
	I1029 09:33:38.864532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:33:38.864567       1 main.go:301] handling current node
	
	
	==> kube-apiserver [15deeb92de4799a5896e0b1d2bb95ad8660db0e8da65e42390544d6bec6b7088] <==
	I1029 09:32:47.127875       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:32:47.128190       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1029 09:32:47.169014       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1029 09:32:47.169357       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:32:47.185869       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1029 09:32:47.186142       1 shared_informer.go:318] Caches are synced for configmaps
	I1029 09:32:47.191097       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1029 09:32:47.191248       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1029 09:32:47.195015       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1029 09:32:47.197423       1 aggregator.go:166] initial CRD sync complete...
	I1029 09:32:47.197639       1 autoregister_controller.go:141] Starting autoregister controller
	I1029 09:32:47.197679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:32:47.197726       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:32:47.259144       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:32:47.777639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:32:49.005726       1 controller.go:624] quota admission added evaluator for: namespaces
	I1029 09:32:49.065941       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1029 09:32:49.101285       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:32:49.132577       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:32:49.145194       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1029 09:32:49.264775       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.81.224"}
	I1029 09:32:49.298106       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.109.241"}
	I1029 09:32:59.442547       1 controller.go:624] quota admission added evaluator for: endpoints
	I1029 09:32:59.464506       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:32:59.505743       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a5366971dd2d52fc09c0ee8faad87d9d554996df31f7e1674b9d9b415dce9d79] <==
	I1029 09:32:59.589491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.165797ms"
	I1029 09:32:59.589665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.48µs"
	I1029 09:32:59.597224       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f4h82"
	I1029 09:32:59.598546       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1029 09:32:59.610510       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-dvv98"
	I1029 09:32:59.625557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.195865ms"
	I1029 09:32:59.653038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.452667ms"
	I1029 09:32:59.676426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.803491ms"
	I1029 09:32:59.676926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.208µs"
	I1029 09:32:59.692759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.23µs"
	I1029 09:32:59.719142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.942408ms"
	I1029 09:32:59.720161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.125µs"
	I1029 09:32:59.737970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.706µs"
	I1029 09:32:59.944725       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:32:59.944753       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1029 09:32:59.973265       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:33:05.284244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.279µs"
	I1029 09:33:06.298961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.739µs"
	I1029 09:33:07.305719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.13µs"
	I1029 09:33:09.311166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.717299ms"
	I1029 09:33:09.311966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.745µs"
	I1029 09:33:21.342121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.352µs"
	I1029 09:33:21.867917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.26266ms"
	I1029 09:33:21.868182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.926µs"
	I1029 09:33:29.952859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.664µs"
	
	
	==> kube-proxy [2caaaff66733a607dc3dcf0a9fda574cba6e68a7ed1972b5ba272c9ebca233b9] <==
	I1029 09:32:48.729441       1 server_others.go:69] "Using iptables proxy"
	I1029 09:32:48.752384       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1029 09:32:48.809284       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:32:48.814981       1 server_others.go:152] "Using iptables Proxier"
	I1029 09:32:48.815097       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1029 09:32:48.815135       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1029 09:32:48.815217       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1029 09:32:48.815597       1 server.go:846] "Version info" version="v1.28.0"
	I1029 09:32:48.815998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:32:48.817811       1 config.go:188] "Starting service config controller"
	I1029 09:32:48.817982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1029 09:32:48.818036       1 config.go:97] "Starting endpoint slice config controller"
	I1029 09:32:48.818063       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1029 09:32:48.820401       1 config.go:315] "Starting node config controller"
	I1029 09:32:48.820497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1029 09:32:48.918801       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1029 09:32:48.918853       1 shared_informer.go:318] Caches are synced for service config
	I1029 09:32:48.920640       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [85f50a83501bd8c007c1d4b5360ff663d8311adaae8d6d89173f2b09d0a448dc] <==
	I1029 09:32:45.056199       1 serving.go:348] Generated self-signed cert in-memory
	W1029 09:32:46.964672       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:32:46.964764       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:32:46.964797       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:32:46.964830       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:32:47.058140       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1029 09:32:47.058251       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:32:47.060143       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1029 09:32:47.068648       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:32:47.068756       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:32:47.068800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1029 09:32:47.111753       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1029 09:32:47.111826       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:32:47.140050       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1029 09:32:47.140162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1029 09:32:47.140504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1029 09:32:47.140706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1029 09:32:47.140624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1029 09:32:47.140834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1029 09:32:47.140675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1029 09:32:47.140922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1029 09:32:48.369404       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: I1029 09:32:59.736269     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khh9j\" (UniqueName: \"kubernetes.io/projected/18b22fc8-08c6-4108-ad39-49635e52ab91-kube-api-access-khh9j\") pod \"dashboard-metrics-scraper-5f989dc9cf-f4h82\" (UID: \"18b22fc8-08c6-4108-ad39-49635e52ab91\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82"
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: I1029 09:32:59.736515     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpqp\" (UniqueName: \"kubernetes.io/projected/7c0cb30a-8153-4136-80e4-1c87bbec948c-kube-api-access-vwpqp\") pod \"kubernetes-dashboard-8694d4445c-dvv98\" (UID: \"7c0cb30a-8153-4136-80e4-1c87bbec948c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dvv98"
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: I1029 09:32:59.736660     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7c0cb30a-8153-4136-80e4-1c87bbec948c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dvv98\" (UID: \"7c0cb30a-8153-4136-80e4-1c87bbec948c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dvv98"
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: W1029 09:32:59.958813     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d WatchSource:0}: Error finding container 85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d: Status 404 returned error can't find the container with id 85f3b24f28dd19b39a4d86f915663fbb5fb643e137f5109cee2704d15ddc514d
	Oct 29 09:32:59 old-k8s-version-162751 kubelet[774]: W1029 09:32:59.992722     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ff565e88a53d8edb5070c5f93c9d1fefa68d9886105b4ed571d8aace0997aad2/crio-aa06a2a05cc3f42e94196b9e06881c1124223752566e801e7b7f4eb52e4a2ee2 WatchSource:0}: Error finding container aa06a2a05cc3f42e94196b9e06881c1124223752566e801e7b7f4eb52e4a2ee2: Status 404 returned error can't find the container with id aa06a2a05cc3f42e94196b9e06881c1124223752566e801e7b7f4eb52e4a2ee2
	Oct 29 09:33:05 old-k8s-version-162751 kubelet[774]: I1029 09:33:05.266167     774 scope.go:117] "RemoveContainer" containerID="fe08021aff09fbdea0f0a3cbae40c98dea9ab6e390cd38cdffa96652fdf38082"
	Oct 29 09:33:06 old-k8s-version-162751 kubelet[774]: I1029 09:33:06.277031     774 scope.go:117] "RemoveContainer" containerID="fe08021aff09fbdea0f0a3cbae40c98dea9ab6e390cd38cdffa96652fdf38082"
	Oct 29 09:33:06 old-k8s-version-162751 kubelet[774]: I1029 09:33:06.277353     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:06 old-k8s-version-162751 kubelet[774]: E1029 09:33:06.277728     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:07 old-k8s-version-162751 kubelet[774]: I1029 09:33:07.282464     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:07 old-k8s-version-162751 kubelet[774]: E1029 09:33:07.282747     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:09 old-k8s-version-162751 kubelet[774]: I1029 09:33:09.938442     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:09 old-k8s-version-162751 kubelet[774]: E1029 09:33:09.938759     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:19 old-k8s-version-162751 kubelet[774]: I1029 09:33:19.309964     774 scope.go:117] "RemoveContainer" containerID="6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af"
	Oct 29 09:33:19 old-k8s-version-162751 kubelet[774]: I1029 09:33:19.338830     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dvv98" podStartSLOduration=11.123512307 podCreationTimestamp="2025-10-29 09:32:59 +0000 UTC" firstStartedPulling="2025-10-29 09:32:59.995376607 +0000 UTC m=+19.110538221" lastFinishedPulling="2025-10-29 09:33:09.209540224 +0000 UTC m=+28.324701838" observedRunningTime="2025-10-29 09:33:09.302133645 +0000 UTC m=+28.417295267" watchObservedRunningTime="2025-10-29 09:33:19.337675924 +0000 UTC m=+38.452837538"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: I1029 09:33:21.136112     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: I1029 09:33:21.319354     774 scope.go:117] "RemoveContainer" containerID="4695eb8d040b2c4ba3a0c50706e8639ec40c46214d5ddd8f4f8573e13a500182"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: I1029 09:33:21.319932     774 scope.go:117] "RemoveContainer" containerID="09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	Oct 29 09:33:21 old-k8s-version-162751 kubelet[774]: E1029 09:33:21.320418     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:29 old-k8s-version-162751 kubelet[774]: I1029 09:33:29.938066     774 scope.go:117] "RemoveContainer" containerID="09a02eb7dd887de0741e25b4b79c14c2fd3e8f09ad895116c5f8a75ad2bc567c"
	Oct 29 09:33:29 old-k8s-version-162751 kubelet[774]: E1029 09:33:29.938827     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4h82_kubernetes-dashboard(18b22fc8-08c6-4108-ad39-49635e52ab91)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4h82" podUID="18b22fc8-08c6-4108-ad39-49635e52ab91"
	Oct 29 09:33:35 old-k8s-version-162751 kubelet[774]: I1029 09:33:35.805464     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 29 09:33:35 old-k8s-version-162751 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:33:35 old-k8s-version-162751 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:33:35 old-k8s-version-162751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7cc19ed872138312e19fcf79fd56294e7666859c5fa415c4222ecb63f7ac594a] <==
	2025/10/29 09:33:09 Using namespace: kubernetes-dashboard
	2025/10/29 09:33:09 Using in-cluster config to connect to apiserver
	2025/10/29 09:33:09 Using secret token for csrf signing
	2025/10/29 09:33:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:33:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:33:09 Successful initial request to the apiserver, version: v1.28.0
	2025/10/29 09:33:09 Generating JWE encryption key
	2025/10/29 09:33:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:33:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:33:09 Initializing JWE encryption key from synchronized object
	2025/10/29 09:33:09 Creating in-cluster Sidecar client
	2025/10/29 09:33:09 Serving insecurely on HTTP port: 9090
	2025/10/29 09:33:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:33:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:33:09 Starting overwatch
	
	
	==> storage-provisioner [402b270dff1ce36c60626612f013ab04776b9d0049122dd1fc5aa0d5c98c2b9b] <==
	I1029 09:33:19.352787       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:33:19.367517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:33:19.367568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1029 09:33:36.765087       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:33:36.765352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162751_b0b6b5e0-ceca-4958-b59c-5e1402bd5167!
	I1029 09:33:36.766129       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a84a4b-3609-4d3c-a5d7-cfe05ff94030", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-162751_b0b6b5e0-ceca-4958-b59c-5e1402bd5167 became leader
	I1029 09:33:36.869125       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162751_b0b6b5e0-ceca-4958-b59c-5e1402bd5167!
	
	
	==> storage-provisioner [6d708d6e42dc5a9946f22d32d03679cc175c450447d9212eb86a65c47fc6a6af] <==
	I1029 09:32:48.604682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:33:18.608675       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162751 -n old-k8s-version-162751
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162751 -n old-k8s-version-162751: exit status 2 (506.793317ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-162751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (311.012347ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-505993 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-505993 describe deploy/metrics-server -n kube-system: exit status 1 (92.098178ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-505993 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-505993
helpers_test.go:243: (dbg) docker inspect no-preload-505993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a",
	        "Created": "2025-10-29T09:33:49.110598267Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 189803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:33:49.422714559Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/hosts",
	        "LogPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a-json.log",
	        "Name": "/no-preload-505993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-505993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-505993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a",
	                "LowerDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-505993",
	                "Source": "/var/lib/docker/volumes/no-preload-505993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-505993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-505993",
	                "name.minikube.sigs.k8s.io": "no-preload-505993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f298e513d4a3783ef07f09389c0f01be83c912a0513cd94ae71a04ccc7113cb",
	            "SandboxKey": "/var/run/docker/netns/0f298e513d4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-505993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:be:dc:6e:2d:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3147a87e4d57838736bbe9648b553b17f7ec6f1da903b525594523d0b3c2da78",
	                    "EndpointID": "0afc9262783b9c9f6d7e4a22488d0a11fd66be02b6204b79204cf231a5fbcb76",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-505993",
	                        "d63baf692038"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-505993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-505993 logs -n 25: (1.219925489s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-937200 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-937200 sudo crio config                                                                                                                                                                                                             │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │                     │
	│ delete  │ -p cilium-937200                                                                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:29 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:30 UTC │
	│ delete  │ -p force-systemd-env-116185                                                                                                                                                                                                                   │ force-systemd-env-116185 │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:30 UTC │
	│ start   │ -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ cert-options-699236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	│ stop    │ -p old-k8s-version-162751 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
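
The cert-options rows above verify the apiserver certificate by dumping it with openssl x509 -text -noout and checking for the requested names and IPs. A minimal Go sketch of the same SAN check (a hypothetical standalone helper, not part of the test suite; the path and expected SANs are taken from the table rows):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
		"slices"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		if !slices.Contains(cert.DNSNames, "www.google.com") {
			fmt.Println("missing expected DNS SAN www.google.com")
		}
		if !slices.ContainsFunc(cert.IPAddresses, func(ip net.IP) bool { return ip.Equal(net.ParseIP("192.168.15.15")) }) {
			fmt.Println("missing expected IP SAN 192.168.15.15")
		}
	}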
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:33:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:33:56.839616  191189 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:33:56.846552  191189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:56.846618  191189 out.go:374] Setting ErrFile to fd 2...
	I1029 09:33:56.846652  191189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:33:56.847106  191189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:33:56.848073  191189 out.go:368] Setting JSON to false
	I1029 09:33:56.848967  191189 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4589,"bootTime":1761725848,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:33:56.849031  191189 start.go:143] virtualization:  
	I1029 09:33:56.852262  191189 out.go:179] * [embed-certs-946178] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:33:56.856561  191189 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:33:56.856805  191189 notify.go:221] Checking for updates...
	I1029 09:33:56.863055  191189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:33:56.865994  191189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:33:56.868842  191189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:33:56.871912  191189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:33:56.874814  191189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:33:54.643925  189343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-505993
	
	I1029 09:33:54.643952  189343 ubuntu.go:182] provisioning hostname "no-preload-505993"
	I1029 09:33:54.644017  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:54.664666  189343 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:54.664982  189343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1029 09:33:54.664999  189343 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-505993 && echo "no-preload-505993" | sudo tee /etc/hostname
	I1029 09:33:54.849069  189343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-505993
	
	I1029 09:33:54.849146  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:54.870417  189343 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:54.870721  189343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1029 09:33:54.870739  189343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-505993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-505993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-505993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:33:55.020861  189343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
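
The docker container inspect -f calls above use a Go template to read back the host port that Docker mapped to the container's 22/tcp; that port (33053 in this run) is what the SSH client and the later sshutil lines connect to. A minimal standalone sketch of the same lookup, assuming the docker CLI is on PATH (minikube routes this through cli_runner rather than calling exec directly):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"no-preload-505993").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. 33053
	}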
	I1029 09:33:55.020894  189343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:33:55.020913  189343 ubuntu.go:190] setting up certificates
	I1029 09:33:55.020922  189343 provision.go:84] configureAuth start
	I1029 09:33:55.020989  189343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-505993
	I1029 09:33:55.039487  189343 provision.go:143] copyHostCerts
	I1029 09:33:55.039559  189343 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:33:55.039572  189343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:33:55.039663  189343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:33:55.039767  189343 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:33:55.039778  189343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:33:55.039811  189343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:33:55.039876  189343 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:33:55.039884  189343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:33:55.039909  189343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:33:55.039966  189343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.no-preload-505993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-505993]
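
The "generating server cert" step issues a server certificate whose subject alternative names are exactly the san=[...] list above, signed by the CA referenced on the same line. A rough sketch of that issuance with Go's crypto/x509, assuming caCert and caKey have already been parsed from ca.pem and ca-key.pem (a hypothetical helper; the organization and expiry values are copied from the log and from the profile config dumped later):

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate carrying the SANs logged above.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-505993"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-505993"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err
	}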
	I1029 09:33:55.362775  189343 provision.go:177] copyRemoteCerts
	I1029 09:33:55.362840  189343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:33:55.362879  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:55.381027  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:55.484166  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:33:55.502066  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:33:55.519507  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:33:55.537420  189343 provision.go:87] duration metric: took 516.483453ms to configureAuth
	I1029 09:33:55.537446  189343 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:33:55.537630  189343 config.go:182] Loaded profile config "no-preload-505993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:33:55.537745  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:55.555115  189343 main.go:143] libmachine: Using SSH client type: native
	I1029 09:33:55.555459  189343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1029 09:33:55.555482  189343 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:33:55.941628  189343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:33:55.941655  189343 machine.go:97] duration metric: took 4.479637612s to provisionDockerMachine
	I1029 09:33:55.941666  189343 client.go:176] duration metric: took 8.587001161s to LocalClient.Create
	I1029 09:33:55.941681  189343 start.go:167] duration metric: took 8.587079932s to libmachine.API.Create "no-preload-505993"
	I1029 09:33:55.941689  189343 start.go:293] postStartSetup for "no-preload-505993" (driver="docker")
	I1029 09:33:55.941713  189343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:33:55.941778  189343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:33:55.941822  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:55.962421  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:56.114397  189343 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:33:56.118199  189343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:33:56.118233  189343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:33:56.118245  189343 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:33:56.118326  189343 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:33:56.118478  189343 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:33:56.118677  189343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:33:56.146044  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:33:56.170993  189343 start.go:296] duration metric: took 229.289905ms for postStartSetup
	I1029 09:33:56.171369  189343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-505993
	I1029 09:33:56.209981  189343 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/config.json ...
	I1029 09:33:56.210245  189343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:33:56.210286  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:56.230353  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:56.339140  189343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:33:56.345114  189343 start.go:128] duration metric: took 8.994475781s to createHost
	I1029 09:33:56.345136  189343 start.go:83] releasing machines lock for "no-preload-505993", held for 8.994663114s
	I1029 09:33:56.345221  189343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-505993
	I1029 09:33:56.369823  189343 ssh_runner.go:195] Run: cat /version.json
	I1029 09:33:56.369856  189343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:33:56.369874  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:56.369925  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:56.387495  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:56.414017  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:56.609692  189343 ssh_runner.go:195] Run: systemctl --version
	I1029 09:33:56.616047  189343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:33:56.654947  189343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:33:56.659487  189343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:33:56.659558  189343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:33:56.704804  189343 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 09:33:56.704828  189343 start.go:496] detecting cgroup driver to use...
	I1029 09:33:56.704871  189343 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:33:56.704947  189343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:33:56.734160  189343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:33:56.755604  189343 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:33:56.755662  189343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:33:56.775903  189343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:33:56.799230  189343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:33:56.878455  191189 config.go:182] Loaded profile config "no-preload-505993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:33:56.878549  191189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:33:56.919911  191189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:33:56.920086  191189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:57.017004  191189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-29 09:33:57.006911105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:57.017107  191189 docker.go:319] overlay module found
	I1029 09:33:57.023886  191189 out.go:179] * Using the docker driver based on user configuration
	I1029 09:33:57.026894  191189 start.go:309] selected driver: docker
	I1029 09:33:57.026915  191189 start.go:930] validating driver "docker" against <nil>
	I1029 09:33:57.026928  191189 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:33:57.027634  191189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:33:57.124927  191189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-29 09:33:57.115613074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:33:57.125071  191189 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:33:57.125291  191189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:33:57.128593  191189 out.go:179] * Using Docker driver with root privileges
	I1029 09:33:57.131388  191189 cni.go:84] Creating CNI manager for ""
	I1029 09:33:57.131452  191189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:33:57.131461  191189 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:33:57.131545  191189 start.go:353] cluster config:
	{Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:33:57.134513  191189 out.go:179] * Starting "embed-certs-946178" primary control-plane node in "embed-certs-946178" cluster
	I1029 09:33:57.137259  191189 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:33:57.140113  191189 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:33:57.142952  191189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:33:57.143023  191189 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:33:57.143036  191189 cache.go:59] Caching tarball of preloaded images
	I1029 09:33:57.143125  191189 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:33:57.143134  191189 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:33:57.143239  191189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json ...
	I1029 09:33:57.143261  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json: {Name:mkfbb3d7287f10f1588cce2cb95529da66e59984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:33:57.143411  191189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:33:57.166504  191189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:33:57.166522  191189 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:33:57.166535  191189 cache.go:233] Successfully downloaded all kic artifacts
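
image.go first asks the local Docker daemon whether the pinned kicbase image (tag plus sha256 digest) is already present and only pulls or loads it when it is not. One way to express the same presence check with the docker CLI (a sketch only; minikube's lookup goes through its image package rather than this exact command):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8"
		// docker image inspect exits non-zero when the reference is not in the daemon
		if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
			fmt.Println("not found locally, would pull:", ref)
			return
		}
		fmt.Println("found in local daemon, skipping pull")
	}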
	I1029 09:33:57.166559  191189 start.go:360] acquireMachinesLock for embed-certs-946178: {Name:mk1c928a559dbc3bbce2e34d80593c51300c509b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:33:57.166651  191189 start.go:364] duration metric: took 77.555µs to acquireMachinesLock for "embed-certs-946178"
	I1029 09:33:57.166674  191189 start.go:93] Provisioning new machine with config: &{Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:33:57.166737  191189 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:33:56.971079  189343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:33:57.156926  189343 docker.go:234] disabling docker service ...
	I1029 09:33:57.156993  189343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:33:57.186883  189343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:33:57.202125  189343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:33:57.378149  189343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:33:57.576415  189343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:33:57.589971  189343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:33:57.603940  189343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:33:57.604007  189343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:33:57.613725  189343 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:33:57.613797  189343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:33:57.622984  189343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:33:57.632136  189343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:33:57.655231  189343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:33:57.665310  189343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:33:57.678517  189343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:33:57.702735  189343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
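
Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and inject the unprivileged-port sysctl. The resulting fragment of /etc/crio/crio.conf.d/02-crio.conf would look roughly like this (section placement assumed from the stock CRI-O layout, not copied from the node):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]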
	I1029 09:33:57.717435  189343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:33:57.727170  189343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:33:57.736642  189343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:33:57.903374  189343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:33:58.136783  189343 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:33:58.136845  189343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:33:58.141356  189343 start.go:564] Will wait 60s for crictl version
	I1029 09:33:58.141419  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.145218  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:33:58.188855  189343 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:33:58.188950  189343 ssh_runner.go:195] Run: crio --version
	I1029 09:33:58.222858  189343 ssh_runner.go:195] Run: crio --version
	I1029 09:33:58.259686  189343 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:33:57.170117  191189 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:33:57.170361  191189 start.go:159] libmachine.API.Create for "embed-certs-946178" (driver="docker")
	I1029 09:33:57.170385  191189 client.go:173] LocalClient.Create starting
	I1029 09:33:57.170450  191189 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 09:33:57.170479  191189 main.go:143] libmachine: Decoding PEM data...
	I1029 09:33:57.170492  191189 main.go:143] libmachine: Parsing certificate...
	I1029 09:33:57.170546  191189 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 09:33:57.170573  191189 main.go:143] libmachine: Decoding PEM data...
	I1029 09:33:57.170582  191189 main.go:143] libmachine: Parsing certificate...
	I1029 09:33:57.170935  191189 cli_runner.go:164] Run: docker network inspect embed-certs-946178 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:33:57.187804  191189 cli_runner.go:211] docker network inspect embed-certs-946178 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:33:57.187880  191189 network_create.go:284] running [docker network inspect embed-certs-946178] to gather additional debugging logs...
	I1029 09:33:57.187898  191189 cli_runner.go:164] Run: docker network inspect embed-certs-946178
	W1029 09:33:57.204955  191189 cli_runner.go:211] docker network inspect embed-certs-946178 returned with exit code 1
	I1029 09:33:57.204980  191189 network_create.go:287] error running [docker network inspect embed-certs-946178]: docker network inspect embed-certs-946178: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-946178 not found
	I1029 09:33:57.204999  191189 network_create.go:289] output of [docker network inspect embed-certs-946178]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-946178 not found
	
	** /stderr **
	I1029 09:33:57.205088  191189 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:33:57.224042  191189 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
	I1029 09:33:57.224552  191189 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2a2304196dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:c9:a9:e0:d0:7a} reservation:<nil>}
	I1029 09:33:57.224875  191189 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e863a0178057 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:86:09:fc:5e:55} reservation:<nil>}
	I1029 09:33:57.225118  191189 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3147a87e4d57 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:46:65:84:46:f3} reservation:<nil>}
	I1029 09:33:57.225504  191189 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06d10}
	I1029 09:33:57.225520  191189 network_create.go:124] attempt to create docker network embed-certs-946178 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1029 09:33:57.225572  191189 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-946178 embed-certs-946178
	I1029 09:33:57.322250  191189 network_create.go:108] docker network embed-certs-946178 192.168.85.0/24 created
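
The subnet probe above walks candidate private /24s (192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0, ...), skips any already backing a bridge, and settles on 192.168.85.0/24 for this profile. A rough standalone sketch of that probe, assuming the docker CLI is available (the starting subnet and +9 step match the log; the helper itself is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// usedSubnets collects the IPAM subnets of every existing docker network.
	func usedSubnets() map[string]bool {
		used := map[string]bool{}
		ids, _ := exec.Command("docker", "network", "ls", "-q").Output()
		for _, id := range strings.Fields(string(ids)) {
			out, err := exec.Command("docker", "network", "inspect", id,
				"--format", `{{range .IPAM.Config}}{{.Subnet}} {{end}}`).Output()
			if err != nil {
				continue
			}
			for _, s := range strings.Fields(string(out)) {
				used[s] = true
			}
		}
		return used
	}

	func main() {
		used := usedSubnets()
		for third := 49; third <= 247; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !used[subnet] {
				fmt.Println("free subnet:", subnet) // 192.168.85.0/24 in this run
				return
			}
		}
	}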
	I1029 09:33:57.322278  191189 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-946178" container
	I1029 09:33:57.322352  191189 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:33:57.342099  191189 cli_runner.go:164] Run: docker volume create embed-certs-946178 --label name.minikube.sigs.k8s.io=embed-certs-946178 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:33:57.366018  191189 oci.go:103] Successfully created a docker volume embed-certs-946178
	I1029 09:33:57.366117  191189 cli_runner.go:164] Run: docker run --rm --name embed-certs-946178-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-946178 --entrypoint /usr/bin/test -v embed-certs-946178:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:33:57.995753  191189 oci.go:107] Successfully prepared a docker volume embed-certs-946178
	I1029 09:33:57.995799  191189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:33:57.995819  191189 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:33:57.995897  191189 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-946178:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:33:58.263082  189343 cli_runner.go:164] Run: docker network inspect no-preload-505993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:33:58.278433  189343 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:33:58.282588  189343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:33:58.293195  189343 kubeadm.go:884] updating cluster {Name:no-preload-505993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-505993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:33:58.293303  189343 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:33:58.293345  189343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:33:58.331417  189343 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1029 09:33:58.331440  189343 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1029 09:33:58.331497  189343 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:33:58.331513  189343 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:58.331599  189343 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1029 09:33:58.331707  189343 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1029 09:33:58.331752  189343 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:58.331499  189343 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:58.331837  189343 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:58.331922  189343 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1029 09:33:58.334746  189343 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1029 09:33:58.335013  189343 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:58.335164  189343 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:58.335304  189343 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:58.335427  189343 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1029 09:33:58.335563  189343 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:33:58.335873  189343 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:58.336096  189343 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1029 09:33:58.559970  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:58.573573  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:58.580509  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1029 09:33:58.588897  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:58.600504  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1029 09:33:58.600771  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:58.603198  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1029 09:33:58.669639  189343 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1029 09:33:58.669735  189343 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:58.669829  189343 ssh_runner.go:195] Run: which crictl
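
Because this profile was created with --preload=false, none of the Kubernetes images are in CRI-O yet, so cache_images.go checks each one: read its ID out of the runtime via podman, and if the image is missing or at a different hash than the cache expects, remove it so the cached archive can be loaded instead. A minimal sketch of that check for a single image (the expected hash is the one logged for kube-scheduler above; the crictl path would normally come from the preceding which crictl):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const img = "registry.k8s.io/kube-scheduler:v1.34.1"
		const want = "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"

		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
		if err != nil || strings.TrimSpace(string(out)) != want {
			fmt.Println("needs transfer:", img)
			// drop whatever is there so the cached tarball can be loaded in its place
			_ = exec.Command("sudo", "/usr/local/bin/crictl", "rmi", img).Run()
		}
	}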
	I1029 09:33:58.873491  189343 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1029 09:33:58.873601  189343 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:58.873696  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.873848  189343 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1029 09:33:58.873903  189343 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1029 09:33:58.873942  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.905208  189343 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1029 09:33:58.905297  189343 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:58.905342  189343 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1029 09:33:58.905378  189343 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1029 09:33:58.905429  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.905468  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.905527  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:58.905305  189343 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1029 09:33:58.905589  189343 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:58.905613  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.905226  189343 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1029 09:33:58.905654  189343 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1029 09:33:58.905672  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:58.905725  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:58.905729  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	W1029 09:33:58.906797  189343 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1029 09:33:58.906875  189343 retry.go:31] will retry after 328.472753ms: ssh: rejected: connect failed (open failed)
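
The warning above shows an SSH session being rejected mid-stream; ssh_runner resets the client and retry.go schedules another attempt roughly 330ms later, after which the same crictl rmi commands are re-issued on fresh connections. The shape of that logic is a plain bounded retry, sketched below (runOverSSH is a hypothetical stand-in for re-running the command on a reconnected client; the real helper computes its own delay):

	import "time"

	// retry runs fn up to attempts times, pausing between failed tries.
	func retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay) // ~328ms in the log line above
		}
		return err
	}

	// usage: _ = retry(3, 330*time.Millisecond, func() error { return runOverSSH("sudo crictl rmi ...") })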
	I1029 09:33:58.987695  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1029 09:33:58.987779  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:58.988010  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:58.988113  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:58.988634  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:58.988750  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:58.989266  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:58.989329  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:58.989630  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1029 09:33:58.989715  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:58.996498  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:58.996580  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:33:59.096489  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:59.096522  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:59.097615  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:59.109401  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:59.112272  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:59.117527  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:33:59.441030  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1029 09:33:59.441119  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:59.491708  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1029 09:33:59.585976  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1029 09:33:59.586078  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1029 09:33:59.631425  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:59.632402  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	W1029 09:33:59.680198  189343 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1029 09:33:59.680488  189343 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:33:59.761645  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1029 09:33:59.761657  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1029 09:33:59.761778  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1029 09:33:59.761830  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1029 09:33:59.761867  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1029 09:33:59.761678  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1029 09:33:59.762067  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1029 09:33:59.761734  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1029 09:33:59.766576  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1029 09:33:59.936820  189343 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1029 09:33:59.936929  189343 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:33:59.937016  189343 ssh_runner.go:195] Run: which crictl
	I1029 09:33:59.973575  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1029 09:33:59.973896  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1029 09:33:59.973660  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1029 09:33:59.974078  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1029 09:33:59.973674  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1029 09:33:59.974217  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1029 09:33:59.973690  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1029 09:33:59.973742  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1029 09:33:59.974349  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1029 09:33:59.973756  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1029 09:33:59.974445  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1029 09:33:59.973771  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1029 09:33:59.974516  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1029 09:33:59.973818  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:33:59.974677  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1029 09:34:00.104568  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1029 09:34:00.104673  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1029 09:34:00.104923  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1029 09:34:00.104978  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1029 09:34:00.105090  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1029 09:34:00.105129  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1029 09:34:00.105241  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1029 09:34:00.105291  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1029 09:34:00.105411  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1029 09:34:00.105451  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1029 09:34:00.105594  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:34:00.446441  189343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:34:00.449029  189343 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1029 09:34:00.449098  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1029 09:34:01.280387  189343 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1029 09:34:01.280505  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1029 09:34:01.280556  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1029 09:34:01.280580  189343 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1029 09:34:01.280616  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
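
The image-cache flow above follows a check-then-load pattern: stat the tarball under /var/lib/minikube/images, scp it from the local cache only when the stat fails, then load it into the runtime with podman load. A minimal single-machine sketch of that pattern (illustrative only; the real flow runs each command over SSH via ssh_runner, and the paths here are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImageLoaded mirrors the pattern in the log: an existence check with
// stat, a copy only when the tarball is missing, then `podman load`.
func ensureImageLoaded(cachedTarball, nodePath string) error {
	if err := exec.Command("stat", "-c", "%s %y", nodePath).Run(); err != nil {
		// stat exits non-zero when the file does not exist yet.
		if err := exec.Command("sudo", "cp", cachedTarball, nodePath).Run(); err != nil {
			return fmt.Errorf("transfer %s: %w", cachedTarball, err)
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", nodePath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder paths modelled on the ones in the log.
	_ = ensureImageLoaded(
		"cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
}
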
	I1029 09:34:04.051761  191189 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-946178:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.055817927s)
	I1029 09:34:04.051795  191189 kic.go:203] duration metric: took 6.055972431s to extract preloaded images to volume ...
	W1029 09:34:04.051947  191189 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 09:34:04.052059  191189 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:34:04.134818  191189 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-946178 --name embed-certs-946178 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-946178 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-946178 --network embed-certs-946178 --ip 192.168.85.2 --volume embed-certs-946178:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:34:04.460812  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Running}}
	I1029 09:34:04.490343  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:34:04.521359  191189 cli_runner.go:164] Run: docker exec embed-certs-946178 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:34:04.585104  191189 oci.go:144] the created container "embed-certs-946178" has a running status.
	I1029 09:34:04.585134  191189 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa...
	I1029 09:34:05.163865  191189 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:34:05.199998  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:34:05.231034  191189 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:34:05.231059  191189 kic_runner.go:114] Args: [docker exec --privileged embed-certs-946178 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:34:05.307088  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:34:05.334031  191189 machine.go:94] provisionDockerMachine start ...
	I1029 09:34:05.334117  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:05.352994  191189 main.go:143] libmachine: Using SSH client type: native
	I1029 09:34:05.353328  191189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1029 09:34:05.353344  191189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:34:05.354945  191189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
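
Each docker container inspect call above with the (index ... "22/tcp") format resolves the host port Docker published for the container's SSH port; the later SSH clients then dial 127.0.0.1 on that port (33053 for no-preload-505993, 33058 for embed-certs-946178). A rough sketch of the lookup, assuming only that the docker CLI is available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	if port, err := sshHostPort("embed-certs-946178"); err == nil {
		fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
	}
}
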
	I1029 09:34:04.676423  189343 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.395887956s)
	I1029 09:34:04.676465  189343 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1029 09:34:04.676496  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1029 09:34:04.676642  189343 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (3.396010311s)
	I1029 09:34:04.676655  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1029 09:34:04.676675  189343 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1029 09:34:04.676725  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1029 09:34:06.384081  189343 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.707329964s)
	I1029 09:34:06.384108  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1029 09:34:06.384136  189343 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1029 09:34:06.384182  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1029 09:34:08.560920  191189 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-946178
	
	I1029 09:34:08.560988  191189 ubuntu.go:182] provisioning hostname "embed-certs-946178"
	I1029 09:34:08.561105  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:08.590757  191189 main.go:143] libmachine: Using SSH client type: native
	I1029 09:34:08.591057  191189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1029 09:34:08.591075  191189 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-946178 && echo "embed-certs-946178" | sudo tee /etc/hostname
	I1029 09:34:08.774650  191189 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-946178
	
	I1029 09:34:08.774731  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:08.796530  191189 main.go:143] libmachine: Using SSH client type: native
	I1029 09:34:08.796899  191189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1029 09:34:08.796939  191189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-946178' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-946178/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-946178' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:34:08.961633  191189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
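
The SSH command above makes the /etc/hosts edit idempotent: it only rewrites (or appends) the 127.0.1.1 entry when the hostname is not already present. A local sketch of the same logic, assuming direct file access rather than sed/tee over SSH:

package main

import (
	"os"
	"regexp"
)

// ensureHostsEntry adds or rewrites the 127.0.1.1 entry if hostname is absent.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already present anywhere in the file: nothing to do.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte(entry))
	} else {
		data = append(data, []byte("\n"+entry+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "embed-certs-946178")
}
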
	I1029 09:34:08.961710  191189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:34:08.961757  191189 ubuntu.go:190] setting up certificates
	I1029 09:34:08.961806  191189 provision.go:84] configureAuth start
	I1029 09:34:08.961896  191189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:34:08.983180  191189 provision.go:143] copyHostCerts
	I1029 09:34:08.983240  191189 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:34:08.983249  191189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:34:08.983319  191189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:34:08.983402  191189 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:34:08.983406  191189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:34:08.983433  191189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:34:08.983490  191189 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:34:08.983494  191189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:34:08.983517  191189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:34:08.983582  191189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.embed-certs-946178 san=[127.0.0.1 192.168.85.2 embed-certs-946178 localhost minikube]
	I1029 09:34:09.343813  191189 provision.go:177] copyRemoteCerts
	I1029 09:34:09.343932  191189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:34:09.344017  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:09.365871  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:09.472634  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:34:09.498016  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:34:09.520048  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:34:09.544798  191189 provision.go:87] duration metric: took 582.948547ms to configureAuth
	I1029 09:34:09.544877  191189 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:34:09.545106  191189 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:34:09.545271  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:09.565275  191189 main.go:143] libmachine: Using SSH client type: native
	I1029 09:34:09.565577  191189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1029 09:34:09.565590  191189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:34:09.850760  191189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:34:09.850834  191189 machine.go:97] duration metric: took 4.516781568s to provisionDockerMachine
	I1029 09:34:09.850860  191189 client.go:176] duration metric: took 12.680468108s to LocalClient.Create
	I1029 09:34:09.850905  191189 start.go:167] duration metric: took 12.680541684s to libmachine.API.Create "embed-certs-946178"
	I1029 09:34:09.850917  191189 start.go:293] postStartSetup for "embed-certs-946178" (driver="docker")
	I1029 09:34:09.850938  191189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:34:09.850998  191189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:34:09.851038  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:09.870524  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:09.976768  191189 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:34:09.980853  191189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:34:09.980881  191189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:34:09.980891  191189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:34:09.980946  191189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:34:09.981023  191189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:34:09.981137  191189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:34:09.988889  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:34:10.007873  191189 start.go:296] duration metric: took 156.927277ms for postStartSetup
	I1029 09:34:10.008474  191189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:34:10.035349  191189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json ...
	I1029 09:34:10.035644  191189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:34:10.035686  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:10.057288  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:10.162572  191189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:34:10.168270  191189 start.go:128] duration metric: took 13.001518803s to createHost
	I1029 09:34:10.168292  191189 start.go:83] releasing machines lock for "embed-certs-946178", held for 13.001632822s
	I1029 09:34:10.168371  191189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:34:10.190063  191189 ssh_runner.go:195] Run: cat /version.json
	I1029 09:34:10.190115  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:10.190356  191189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:34:10.190405  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:10.208301  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:10.226083  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:10.324198  191189 ssh_runner.go:195] Run: systemctl --version
	I1029 09:34:10.437716  191189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:34:10.493284  191189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:34:10.498471  191189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:34:10.498569  191189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:34:10.538680  191189 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 09:34:10.538726  191189 start.go:496] detecting cgroup driver to use...
	I1029 09:34:10.538777  191189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:34:10.538869  191189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:34:10.557448  191189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:34:10.571308  191189 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:34:10.571405  191189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:34:10.589657  191189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:34:10.615843  191189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:34:10.755695  191189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:34:10.910151  191189 docker.go:234] disabling docker service ...
	I1029 09:34:10.910250  191189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:34:10.939115  191189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:34:10.956363  191189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:34:11.112131  191189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:34:11.255099  191189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:34:11.269430  191189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:34:11.290009  191189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:34:11.290106  191189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.304005  191189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:34:11.304109  191189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.314119  191189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.323767  191189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.333076  191189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:34:11.341628  191189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.351375  191189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.366527  191189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:34:11.375774  191189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:34:11.384344  191189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:34:11.392779  191189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:34:11.545345  191189 ssh_runner.go:195] Run: sudo systemctl restart crio
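
The sed runs above rewrite individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. A simplified sketch of that kind of keyed rewrite, operating on a local file instead of shelling out to sed (path and keys taken from the log; this is not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces any existing `key = ...` line with `key = "value"`.
// Lines for keys that are not present are left untouched in this sketch.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "./02-crio.conf" // placeholder for /etc/crio/crio.conf.d/02-crio.conf
	_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	// After editing the real file: systemctl daemon-reload && systemctl restart crio.
}
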
	I1029 09:34:07.824447  189343 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.440245917s)
	I1029 09:34:07.824477  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1029 09:34:07.824503  189343 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1029 09:34:07.824551  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1029 09:34:09.549877  189343 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.725300684s)
	I1029 09:34:09.549908  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1029 09:34:09.549929  189343 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1029 09:34:09.549974  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1029 09:34:11.306249  189343 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.756249467s)
	I1029 09:34:11.306276  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1029 09:34:11.306296  189343 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1029 09:34:11.306339  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1029 09:34:12.106635  191189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:34:12.106743  191189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:34:12.110991  191189 start.go:564] Will wait 60s for crictl version
	I1029 09:34:12.111085  191189 ssh_runner.go:195] Run: which crictl
	I1029 09:34:12.115408  191189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:34:12.141486  191189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:34:12.141608  191189 ssh_runner.go:195] Run: crio --version
	I1029 09:34:12.174857  191189 ssh_runner.go:195] Run: crio --version
	I1029 09:34:12.209923  191189 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:34:12.212616  191189 cli_runner.go:164] Run: docker network inspect embed-certs-946178 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:34:12.234061  191189 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:34:12.238903  191189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:34:12.248910  191189 kubeadm.go:884] updating cluster {Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:34:12.249063  191189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:34:12.249123  191189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:34:12.302832  191189 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:34:12.302860  191189 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:34:12.302922  191189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:34:12.329473  191189 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:34:12.329545  191189 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:34:12.329568  191189 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:34:12.329706  191189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-946178 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:34:12.329823  191189 ssh_runner.go:195] Run: crio config
	I1029 09:34:12.387143  191189 cni.go:84] Creating CNI manager for ""
	I1029 09:34:12.387166  191189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:34:12.387182  191189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:34:12.387239  191189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-946178 NodeName:embed-certs-946178 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:34:12.387407  191189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-946178"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
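
The kubeadm config above is rendered from the options logged at kubeadm.go:190 (node name and IP, pod and service CIDRs, cgroup driver, CRI socket). A stripped-down sketch of rendering one fragment of such a config with text/template; the field names and template text are simplified placeholders, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterCfg is a trimmed ClusterConfiguration fragment, filled in from the
// same parameters that appear in the generated config above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	ControlPlaneEndpoint string
	APIServerPort        int
	KubernetesVersion    string
	DNSDomain            string
	PodSubnet            string
	ServiceCIDR          string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, params{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		APIServerPort:        8443,
		KubernetesVersion:    "v1.34.1",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.244.0.0/16",
		ServiceCIDR:          "10.96.0.0/12",
	})
}
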
	I1029 09:34:12.387492  191189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:34:12.396794  191189 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:34:12.396918  191189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:34:12.405942  191189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1029 09:34:12.420950  191189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:34:12.436297  191189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1029 09:34:12.451333  191189 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:34:12.455378  191189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:34:12.466862  191189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:34:12.625201  191189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:34:12.643378  191189 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178 for IP: 192.168.85.2
	I1029 09:34:12.643448  191189 certs.go:195] generating shared ca certs ...
	I1029 09:34:12.643478  191189 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:12.643651  191189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:34:12.643733  191189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:34:12.643766  191189 certs.go:257] generating profile certs ...
	I1029 09:34:12.643846  191189 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.key
	I1029 09:34:12.643884  191189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.crt with IP's: []
	I1029 09:34:13.681622  191189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.crt ...
	I1029 09:34:13.681695  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.crt: {Name:mk8508929c73f5d1fb3965eae35415f289815da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:13.681904  191189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.key ...
	I1029 09:34:13.681941  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.key: {Name:mke78fb76590cdac3e0e745f17a1acb07cb46f65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:13.682087  191189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key.8f5fae26
	I1029 09:34:13.682128  191189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt.8f5fae26 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1029 09:34:13.903189  191189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt.8f5fae26 ...
	I1029 09:34:13.903263  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt.8f5fae26: {Name:mk95cad6a0f153462a63d4865bd4ca4bb8d1cd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:13.903480  191189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key.8f5fae26 ...
	I1029 09:34:13.903517  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key.8f5fae26: {Name:mk0226e789b79a1fb21ee7d5cec62b825f601548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:13.903660  191189 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt.8f5fae26 -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt
	I1029 09:34:13.903788  191189 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key.8f5fae26 -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key
	I1029 09:34:13.903895  191189 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key
	I1029 09:34:13.903932  191189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.crt with IP's: []
	I1029 09:34:14.508065  191189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.crt ...
	I1029 09:34:14.508136  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.crt: {Name:mkc6d776ab65a6aa3acf5d01c7f3cc26ad30768c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:14.508369  191189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key ...
	I1029 09:34:14.508405  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key: {Name:mkf15fc84d2378aa1312314c62cb89454b1961f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:14.508648  191189 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:34:14.508714  191189 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:34:14.508738  191189 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:34:14.508795  191189 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:34:14.508844  191189 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:34:14.508903  191189 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:34:14.508974  191189 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:34:14.509573  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:34:14.527531  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:34:14.545858  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:34:14.576077  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:34:14.599757  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:34:14.620328  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:34:14.640471  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:34:14.659829  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:34:14.679037  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:34:14.701456  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:34:14.720585  191189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:34:14.739942  191189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:34:14.754459  191189 ssh_runner.go:195] Run: openssl version
	I1029 09:34:14.761458  191189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:34:14.770779  191189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:34:14.775163  191189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:34:14.775276  191189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:34:14.816688  191189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:34:14.825787  191189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:34:14.834642  191189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:34:14.838831  191189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:34:14.838940  191189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:34:14.880496  191189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:34:14.889476  191189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:34:14.898263  191189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:34:14.902404  191189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:34:14.902543  191189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:34:14.944042  191189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
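
The openssl/ln sequence above installs each CA into the system trust store the way update-ca-certificates would: compute the certificate's subject hash and symlink /etc/ssl/certs/<hash>.0 to the PEM. A compact sketch of that step, assuming openssl is on PATH and the process can write to the certs directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the subject hash of a PEM certificate and links
// <certsDir>/<hash>.0 to it, mirroring the `openssl x509 -hash` + `ln -fs` pair.
func installCA(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Recreate the symlink idempotently, as `ln -fs` does.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
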
	I1029 09:34:14.953009  191189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:34:14.957336  191189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:34:14.957441  191189 kubeadm.go:401] StartCluster: {Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:34:14.957564  191189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:34:14.957649  191189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:34:14.995246  191189 cri.go:89] found id: ""
	I1029 09:34:14.995336  191189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:34:15.009146  191189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:34:15.022564  191189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:34:15.022651  191189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:34:15.034866  191189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:34:15.034921  191189 kubeadm.go:158] found existing configuration files:
	
	I1029 09:34:15.034982  191189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:34:15.044714  191189 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:34:15.044795  191189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:34:15.052905  191189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:34:15.062949  191189 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:34:15.063023  191189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:34:15.071231  191189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:34:15.080080  191189 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:34:15.080164  191189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:34:15.088403  191189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:34:15.099366  191189 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:34:15.099462  191189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 09:34:15.107917  191189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:34:15.152009  191189 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:34:15.152374  191189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:34:15.211209  191189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:34:15.211288  191189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1029 09:34:15.211332  191189 kubeadm.go:319] OS: Linux
	I1029 09:34:15.211388  191189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:34:15.211445  191189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1029 09:34:15.211502  191189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:34:15.211564  191189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:34:15.211624  191189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:34:15.211696  191189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:34:15.211749  191189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:34:15.211803  191189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:34:15.211856  191189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1029 09:34:15.311420  191189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:34:15.311562  191189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:34:15.311688  191189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:34:15.324728  191189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 09:34:15.330545  191189 out.go:252]   - Generating certificates and keys ...
	I1029 09:34:15.330651  191189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:34:15.330739  191189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:34:15.570154  191189 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:34:16.560694  191189 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:34:16.105893  189343 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.799526952s)
	I1029 09:34:16.105917  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1029 09:34:16.105935  189343 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1029 09:34:16.105990  189343 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1029 09:34:16.821405  189343 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1029 09:34:16.821437  189343 cache_images.go:125] Successfully loaded all cached images
	I1029 09:34:16.821443  189343 cache_images.go:94] duration metric: took 18.489991443s to LoadCachedImages
	I1029 09:34:16.821454  189343 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1029 09:34:16.821543  189343 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-505993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-505993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:34:16.821636  189343 ssh_runner.go:195] Run: crio config
	I1029 09:34:16.912133  189343 cni.go:84] Creating CNI manager for ""
	I1029 09:34:16.920540  189343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:34:16.920632  189343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:34:16.920696  189343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-505993 NodeName:no-preload-505993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:34:16.920874  189343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-505993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:34:16.920989  189343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:34:16.929780  189343 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1029 09:34:16.929905  189343 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1029 09:34:16.937742  189343 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1029 09:34:16.937911  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1029 09:34:16.938348  189343 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1029 09:34:16.938360  189343 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1029 09:34:16.943192  189343 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1029 09:34:16.943227  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1029 09:34:17.879895  189343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:34:17.906926  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1029 09:34:17.910797  189343 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1029 09:34:17.910828  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1029 09:34:17.913135  189343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1029 09:34:17.931462  189343 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1029 09:34:17.931556  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1029 09:34:18.602529  189343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:34:18.616601  189343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:34:18.630251  189343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:34:18.643154  189343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1029 09:34:18.656180  189343 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:34:18.660001  189343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:34:18.670524  189343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:34:18.828878  189343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:34:18.846480  189343 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993 for IP: 192.168.76.2
	I1029 09:34:18.846512  189343 certs.go:195] generating shared ca certs ...
	I1029 09:34:18.846528  189343 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:18.846678  189343 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:34:18.846725  189343 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:34:18.846743  189343 certs.go:257] generating profile certs ...
	I1029 09:34:18.846798  189343 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.key
	I1029 09:34:18.846821  189343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt with IP's: []
	I1029 09:34:19.181301  189343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt ...
	I1029 09:34:19.181331  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: {Name:mk837ef84135596205b63305c9fbb9229a602bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:19.181516  189343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.key ...
	I1029 09:34:19.181531  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.key: {Name:mk563671224a1fcdb964bafeb2c89f833eb35fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:19.181622  189343 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.key.b0d46aaa
	I1029 09:34:19.181641  189343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.crt.b0d46aaa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1029 09:34:19.674679  189343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.crt.b0d46aaa ...
	I1029 09:34:19.674710  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.crt.b0d46aaa: {Name:mkd8937208a9bfc43e87efc85985fdd675e742b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:19.674893  189343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.key.b0d46aaa ...
	I1029 09:34:19.674909  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.key.b0d46aaa: {Name:mk27e55d226a3dda3b14e0d1bfc0cfbaf7074349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:19.674999  189343 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.crt.b0d46aaa -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.crt
	I1029 09:34:19.675077  189343 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.key.b0d46aaa -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.key
	I1029 09:34:19.675137  189343 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.key
	I1029 09:34:19.675157  189343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.crt with IP's: []
	I1029 09:34:20.190545  189343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.crt ...
	I1029 09:34:20.190577  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.crt: {Name:mkcbedd10102688f335301634d257d047afe7ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:20.190745  189343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.key ...
	I1029 09:34:20.190762  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.key: {Name:mkea3a64496fa1efb97765b517fea382079c4b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:20.190956  189343 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:34:20.190997  189343 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:34:20.191014  189343 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:34:20.191038  189343 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:34:20.191068  189343 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:34:20.191094  189343 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:34:20.191141  189343 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:34:20.191736  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:34:20.211408  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:34:20.231027  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:34:20.250859  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:34:20.269854  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:34:20.288039  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:34:20.306574  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:34:20.325209  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:34:20.347135  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:34:20.380853  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:34:20.405298  189343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:34:20.428053  189343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:34:20.442336  189343 ssh_runner.go:195] Run: openssl version
	I1029 09:34:20.449091  189343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:34:20.457918  189343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:34:20.461981  189343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:34:20.462058  189343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:34:20.505245  189343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:34:20.514406  189343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:34:20.523659  189343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:34:20.528083  189343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:34:20.528164  189343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:34:20.569414  189343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:34:20.578553  189343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:34:20.587323  189343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:34:20.605848  189343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:34:20.605941  189343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:34:20.674059  189343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:34:20.692055  189343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:34:20.696610  189343 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:34:20.696661  189343 kubeadm.go:401] StartCluster: {Name:no-preload-505993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-505993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:34:20.696760  189343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:34:20.696828  189343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:34:20.737084  189343 cri.go:89] found id: ""
	I1029 09:34:20.737165  189343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:34:20.748067  189343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:34:20.758667  189343 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:34:20.758730  189343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:34:20.769424  189343 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:34:20.769445  189343 kubeadm.go:158] found existing configuration files:
	
	I1029 09:34:20.769502  189343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:34:20.778497  189343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:34:20.778569  189343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:34:20.786546  189343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:34:20.795965  189343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:34:20.796035  189343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:34:20.804070  189343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:34:20.813498  189343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:34:20.813569  189343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:34:20.821805  189343 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:34:20.831033  189343 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:34:20.831103  189343 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 09:34:20.839429  189343 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:34:20.897369  189343 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:34:20.898006  189343 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:34:20.925437  189343 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:34:20.925523  189343 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1029 09:34:20.925582  189343 kubeadm.go:319] OS: Linux
	I1029 09:34:20.925648  189343 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:34:20.925713  189343 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1029 09:34:20.925779  189343 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:34:20.925835  189343 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:34:20.925905  189343 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:34:20.925971  189343 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:34:20.926034  189343 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:34:20.926103  189343 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:34:20.926177  189343 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1029 09:34:20.999381  189343 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:34:20.999503  189343 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:34:20.999611  189343 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:34:21.020670  189343 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 09:34:16.848193  191189 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:34:18.904103  191189 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:34:19.260480  191189 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:34:19.262009  191189 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-946178 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:34:19.872234  191189 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:34:19.873468  191189 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-946178 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:34:20.138464  191189 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:34:20.257896  191189 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:34:20.786599  191189 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:34:20.787198  191189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:34:21.189998  191189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:34:21.812652  191189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:34:21.026381  189343 out.go:252]   - Generating certificates and keys ...
	I1029 09:34:21.026485  189343 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:34:21.026565  189343 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:34:21.524479  189343 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:34:21.744649  189343 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:34:22.064010  191189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:34:22.376119  191189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:34:22.623901  191189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:34:22.625722  191189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:34:22.632635  191189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:34:22.636073  191189 out.go:252]   - Booting up control plane ...
	I1029 09:34:22.636181  191189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:34:22.637245  191189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:34:22.639331  191189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:34:22.659836  191189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:34:22.660363  191189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:34:22.670029  191189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:34:22.670132  191189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:34:22.670174  191189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:34:22.840839  191189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:34:22.840969  191189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:34:23.340642  191189 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.142847ms
	I1029 09:34:23.342200  191189 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:34:23.342544  191189 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:34:23.342856  191189 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:34:23.343162  191189 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:34:22.215139  189343 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:34:22.622516  189343 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:34:23.336455  189343 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:34:23.336979  189343 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-505993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1029 09:34:23.495410  189343 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:34:23.495801  189343 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-505993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1029 09:34:24.464695  189343 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:34:24.781509  189343 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:34:25.456659  189343 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:34:25.457197  189343 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:34:26.582965  189343 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:34:26.935297  189343 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:34:27.039096  189343 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:34:27.717237  189343 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:34:28.052102  189343 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:34:28.052209  189343 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:34:28.056652  189343 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:34:27.978145  191189 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.634841291s
	I1029 09:34:30.713103  191189 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.369549773s
	I1029 09:34:28.060008  189343 out.go:252]   - Booting up control plane ...
	I1029 09:34:28.060116  189343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:34:28.060204  189343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:34:28.060274  189343 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:34:28.084485  189343 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:34:28.084876  189343 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:34:28.094867  189343 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:34:28.094972  189343 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:34:28.095017  189343 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:34:28.340859  189343 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:34:28.340985  189343 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:34:29.388700  189343 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.042218481s
	I1029 09:34:29.388822  189343 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:34:29.388913  189343 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1029 09:34:29.389026  189343 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:34:29.389130  189343 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:34:32.846049  191189 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.50307599s
	I1029 09:34:32.869793  191189 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:34:32.888132  191189 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:34:32.908946  191189 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:34:32.909442  191189 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-946178 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:34:32.937355  191189 kubeadm.go:319] [bootstrap-token] Using token: li3gdf.ikihhkw0pob8nswv
	I1029 09:34:32.940209  191189 out.go:252]   - Configuring RBAC rules ...
	I1029 09:34:32.940372  191189 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:34:32.956039  191189 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:34:32.969459  191189 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:34:32.974552  191189 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:34:32.981596  191189 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:34:32.986189  191189 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:34:33.254934  191189 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:34:33.746760  191189 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:34:34.256149  191189 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:34:34.257333  191189 kubeadm.go:319] 
	I1029 09:34:34.257414  191189 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:34:34.257421  191189 kubeadm.go:319] 
	I1029 09:34:34.257501  191189 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:34:34.257506  191189 kubeadm.go:319] 
	I1029 09:34:34.257533  191189 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:34:34.257594  191189 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:34:34.257647  191189 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:34:34.257654  191189 kubeadm.go:319] 
	I1029 09:34:34.257710  191189 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:34:34.257715  191189 kubeadm.go:319] 
	I1029 09:34:34.257765  191189 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:34:34.257769  191189 kubeadm.go:319] 
	I1029 09:34:34.257827  191189 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:34:34.257906  191189 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:34:34.257977  191189 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:34:34.257981  191189 kubeadm.go:319] 
	I1029 09:34:34.258069  191189 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:34:34.258149  191189 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:34:34.258154  191189 kubeadm.go:319] 
	I1029 09:34:34.258241  191189 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token li3gdf.ikihhkw0pob8nswv \
	I1029 09:34:34.258348  191189 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:34:34.258370  191189 kubeadm.go:319] 	--control-plane 
	I1029 09:34:34.258374  191189 kubeadm.go:319] 
	I1029 09:34:34.258462  191189 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:34:34.258467  191189 kubeadm.go:319] 
	I1029 09:34:34.258551  191189 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token li3gdf.ikihhkw0pob8nswv \
	I1029 09:34:34.259439  191189 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:34:34.265354  191189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 09:34:34.265588  191189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:34:34.265696  191189 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:34:34.265712  191189 cni.go:84] Creating CNI manager for ""
	I1029 09:34:34.265719  191189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:34:34.270770  191189 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:34:34.273731  191189 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:34:34.277929  191189 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:34:34.277951  191189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:34:34.326756  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:34:34.869796  191189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:34:34.869918  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:34.870000  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-946178 minikube.k8s.io/updated_at=2025_10_29T09_34_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=embed-certs-946178 minikube.k8s.io/primary=true
	I1029 09:34:35.376437  191189 ops.go:34] apiserver oom_adj: -16
	I1029 09:34:35.376541  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:35.876796  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:36.376649  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:36.363123  189343 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.974143191s
	I1029 09:34:36.877375  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:37.377125  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:37.876974  191189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:37.979494  191189 kubeadm.go:1114] duration metric: took 3.109620639s to wait for elevateKubeSystemPrivileges
	I1029 09:34:37.979534  191189 kubeadm.go:403] duration metric: took 23.022096063s to StartCluster
	I1029 09:34:37.979551  191189 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:37.979610  191189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:34:37.980708  191189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:37.980936  191189 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:34:37.981038  191189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:34:37.981299  191189 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:34:37.981344  191189 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:34:37.981409  191189 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-946178"
	I1029 09:34:37.981433  191189 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-946178"
	I1029 09:34:37.981461  191189 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:34:37.981965  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:34:37.983391  191189 addons.go:70] Setting default-storageclass=true in profile "embed-certs-946178"
	I1029 09:34:37.983510  191189 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-946178"
	I1029 09:34:37.983854  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:34:37.984148  191189 out.go:179] * Verifying Kubernetes components...
	I1029 09:34:37.987647  191189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:34:38.019960  191189 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:34:38.024270  191189 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:34:38.024299  191189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:34:38.024393  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:38.035008  191189 addons.go:239] Setting addon default-storageclass=true in "embed-certs-946178"
	I1029 09:34:38.035056  191189 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:34:38.035497  191189 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:34:38.071795  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:38.074857  191189 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:34:38.074878  191189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:34:38.074945  191189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:34:38.111915  191189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:34:38.499091  191189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:34:38.513213  191189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:34:38.513664  191189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:34:38.586793  191189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:34:40.040229  191189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541096575s)
	I1029 09:34:40.040363  191189 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.52665184s)
	I1029 09:34:40.040385  191189 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1029 09:34:40.040679  191189 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.527437193s)
	I1029 09:34:40.042849  191189 node_ready.go:35] waiting up to 6m0s for node "embed-certs-946178" to be "Ready" ...
	I1029 09:34:40.043165  191189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.456244819s)
	I1029 09:34:40.124393  191189 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:34:37.785227  189343 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.400827534s
	I1029 09:34:40.386604  189343 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.001994149s
	I1029 09:34:40.415621  189343 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:34:40.433987  189343 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:34:40.452794  189343 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:34:40.453267  189343 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-505993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:34:40.467964  189343 kubeadm.go:319] [bootstrap-token] Using token: 1vyald.ykso430p70vjqd6v
	I1029 09:34:40.470950  189343 out.go:252]   - Configuring RBAC rules ...
	I1029 09:34:40.472557  189343 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:34:40.480125  189343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:34:40.488878  189343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:34:40.493033  189343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:34:40.500847  189343 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:34:40.505117  189343 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:34:40.793795  189343 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:34:41.227069  189343 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:34:41.794027  189343 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:34:41.794994  189343 kubeadm.go:319] 
	I1029 09:34:41.795074  189343 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:34:41.795086  189343 kubeadm.go:319] 
	I1029 09:34:41.795168  189343 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:34:41.795180  189343 kubeadm.go:319] 
	I1029 09:34:41.795206  189343 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:34:41.795272  189343 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:34:41.795328  189343 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:34:41.795337  189343 kubeadm.go:319] 
	I1029 09:34:41.795400  189343 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:34:41.795411  189343 kubeadm.go:319] 
	I1029 09:34:41.795461  189343 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:34:41.795470  189343 kubeadm.go:319] 
	I1029 09:34:41.795531  189343 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:34:41.795637  189343 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:34:41.795714  189343 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:34:41.795723  189343 kubeadm.go:319] 
	I1029 09:34:41.795812  189343 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:34:41.795896  189343 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:34:41.795908  189343 kubeadm.go:319] 
	I1029 09:34:41.795998  189343 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1vyald.ykso430p70vjqd6v \
	I1029 09:34:41.796110  189343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:34:41.796135  189343 kubeadm.go:319] 	--control-plane 
	I1029 09:34:41.796146  189343 kubeadm.go:319] 
	I1029 09:34:41.796235  189343 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:34:41.796243  189343 kubeadm.go:319] 
	I1029 09:34:41.796651  189343 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1vyald.ykso430p70vjqd6v \
	I1029 09:34:41.796835  189343 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:34:41.800489  189343 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 09:34:41.800725  189343 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:34:41.800838  189343 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:34:41.800854  189343 cni.go:84] Creating CNI manager for ""
	I1029 09:34:41.800862  189343 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:34:41.804055  189343 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:34:40.127645  191189 addons.go:515] duration metric: took 2.146266895s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:34:40.546563  191189 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-946178" context rescaled to 1 replicas
	I1029 09:34:41.806919  189343 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:34:41.811195  189343 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:34:41.811216  189343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:34:41.825789  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:34:42.177163  189343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:34:42.177279  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:42.177301  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-505993 minikube.k8s.io/updated_at=2025_10_29T09_34_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=no-preload-505993 minikube.k8s.io/primary=true
	I1029 09:34:42.384758  189343 ops.go:34] apiserver oom_adj: -16
	I1029 09:34:42.384864  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:42.885598  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:43.384939  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:43.885890  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:44.385002  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:44.885505  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:45.385887  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:45.885432  189343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:34:45.998484  189343 kubeadm.go:1114] duration metric: took 3.821260618s to wait for elevateKubeSystemPrivileges
	I1029 09:34:45.998514  189343 kubeadm.go:403] duration metric: took 25.301855809s to StartCluster
	I1029 09:34:45.998531  189343 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:45.998602  189343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:34:46.000636  189343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:34:46.008722  189343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:34:46.010630  189343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:34:46.010997  189343 config.go:182] Loaded profile config "no-preload-505993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:34:46.011040  189343 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:34:46.011226  189343 addons.go:70] Setting storage-provisioner=true in profile "no-preload-505993"
	I1029 09:34:46.011251  189343 addons.go:239] Setting addon storage-provisioner=true in "no-preload-505993"
	I1029 09:34:46.011278  189343 host.go:66] Checking if "no-preload-505993" exists ...
	I1029 09:34:46.012035  189343 addons.go:70] Setting default-storageclass=true in profile "no-preload-505993"
	I1029 09:34:46.012105  189343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-505993"
	I1029 09:34:46.012531  189343 cli_runner.go:164] Run: docker container inspect no-preload-505993 --format={{.State.Status}}
	I1029 09:34:46.017044  189343 cli_runner.go:164] Run: docker container inspect no-preload-505993 --format={{.State.Status}}
	I1029 09:34:46.037651  189343 out.go:179] * Verifying Kubernetes components...
	I1029 09:34:46.042624  189343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:34:46.061811  189343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1029 09:34:42.046911  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:34:44.546567  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	I1029 09:34:46.066491  189343 addons.go:239] Setting addon default-storageclass=true in "no-preload-505993"
	I1029 09:34:46.066547  189343 host.go:66] Checking if "no-preload-505993" exists ...
	I1029 09:34:46.067028  189343 cli_runner.go:164] Run: docker container inspect no-preload-505993 --format={{.State.Status}}
	I1029 09:34:46.067179  189343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:34:46.067195  189343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:34:46.067246  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:34:46.119070  189343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:34:46.119097  189343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:34:46.119161  189343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:34:46.120747  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:34:46.156594  189343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:34:46.424555  189343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:34:46.435720  189343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:34:46.526041  189343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:34:46.526250  189343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:34:47.236283  189343 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1029 09:34:47.238732  189343 node_ready.go:35] waiting up to 6m0s for node "no-preload-505993" to be "Ready" ...
	I1029 09:34:47.279353  189343 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1029 09:34:47.045889  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:34:49.046036  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:34:51.047934  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	I1029 09:34:47.282197  189343 addons.go:515] duration metric: took 1.27114241s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:34:47.740397  189343 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-505993" context rescaled to 1 replicas
	W1029 09:34:49.243144  189343 node_ready.go:57] node "no-preload-505993" has "Ready":"False" status (will retry)
	W1029 09:34:51.741640  189343 node_ready.go:57] node "no-preload-505993" has "Ready":"False" status (will retry)
	W1029 09:34:53.545990  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:34:55.546239  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:34:53.742425  189343 node_ready.go:57] node "no-preload-505993" has "Ready":"False" status (will retry)
	W1029 09:34:56.241796  189343 node_ready.go:57] node "no-preload-505993" has "Ready":"False" status (will retry)
	W1029 09:34:58.047960  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:35:00.052901  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:34:58.242481  189343 node_ready.go:57] node "no-preload-505993" has "Ready":"False" status (will retry)
	W1029 09:35:00.264178  189343 node_ready.go:57] node "no-preload-505993" has "Ready":"False" status (will retry)
	I1029 09:35:01.242522  189343 node_ready.go:49] node "no-preload-505993" is "Ready"
	I1029 09:35:01.242560  189343 node_ready.go:38] duration metric: took 14.003796952s for node "no-preload-505993" to be "Ready" ...
	I1029 09:35:01.242579  189343 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:35:01.242647  189343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:35:01.263166  189343 api_server.go:72] duration metric: took 15.254386915s to wait for apiserver process to appear ...
	I1029 09:35:01.263195  189343 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:35:01.263215  189343 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:35:01.271530  189343 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1029 09:35:01.273135  189343 api_server.go:141] control plane version: v1.34.1
	I1029 09:35:01.273165  189343 api_server.go:131] duration metric: took 9.963172ms to wait for apiserver health ...
	I1029 09:35:01.273175  189343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:35:01.276586  189343 system_pods.go:59] 8 kube-system pods found
	I1029 09:35:01.276627  189343 system_pods.go:61] "coredns-66bc5c9577-zpgms" [df9fb184-2e4c-40b9-8345-97ef36012e74] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:35:01.276636  189343 system_pods.go:61] "etcd-no-preload-505993" [0ac9ea1e-38b5-4626-a2dd-8cb0fa2d4b19] Running
	I1029 09:35:01.276642  189343 system_pods.go:61] "kindnet-9z7ks" [ecb0cb93-80cf-4699-8fe1-5da7367b2286] Running
	I1029 09:35:01.276647  189343 system_pods.go:61] "kube-apiserver-no-preload-505993" [e177136c-892b-494e-a2f3-1942b4df0f8a] Running
	I1029 09:35:01.276653  189343 system_pods.go:61] "kube-controller-manager-no-preload-505993" [8957f7ba-9959-45b8-936f-e59a75bb6c15] Running
	I1029 09:35:01.276657  189343 system_pods.go:61] "kube-proxy-r6974" [5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6] Running
	I1029 09:35:01.276661  189343 system_pods.go:61] "kube-scheduler-no-preload-505993" [20ad34f6-1eb1-4020-867a-7b1b8ac9dc6e] Running
	I1029 09:35:01.276668  189343 system_pods.go:61] "storage-provisioner" [3b3fca69-516e-44b0-b831-1a8196bfe62b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:35:01.276679  189343 system_pods.go:74] duration metric: took 3.498287ms to wait for pod list to return data ...
	I1029 09:35:01.276690  189343 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:35:01.279527  189343 default_sa.go:45] found service account: "default"
	I1029 09:35:01.279556  189343 default_sa.go:55] duration metric: took 2.859241ms for default service account to be created ...
	I1029 09:35:01.279568  189343 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:35:01.282930  189343 system_pods.go:86] 8 kube-system pods found
	I1029 09:35:01.282969  189343 system_pods.go:89] "coredns-66bc5c9577-zpgms" [df9fb184-2e4c-40b9-8345-97ef36012e74] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:35:01.282976  189343 system_pods.go:89] "etcd-no-preload-505993" [0ac9ea1e-38b5-4626-a2dd-8cb0fa2d4b19] Running
	I1029 09:35:01.283016  189343 system_pods.go:89] "kindnet-9z7ks" [ecb0cb93-80cf-4699-8fe1-5da7367b2286] Running
	I1029 09:35:01.283049  189343 system_pods.go:89] "kube-apiserver-no-preload-505993" [e177136c-892b-494e-a2f3-1942b4df0f8a] Running
	I1029 09:35:01.283065  189343 system_pods.go:89] "kube-controller-manager-no-preload-505993" [8957f7ba-9959-45b8-936f-e59a75bb6c15] Running
	I1029 09:35:01.283072  189343 system_pods.go:89] "kube-proxy-r6974" [5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6] Running
	I1029 09:35:01.283077  189343 system_pods.go:89] "kube-scheduler-no-preload-505993" [20ad34f6-1eb1-4020-867a-7b1b8ac9dc6e] Running
	I1029 09:35:01.283083  189343 system_pods.go:89] "storage-provisioner" [3b3fca69-516e-44b0-b831-1a8196bfe62b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:35:01.283105  189343 retry.go:31] will retry after 285.6104ms: missing components: kube-dns
	I1029 09:35:01.574007  189343 system_pods.go:86] 8 kube-system pods found
	I1029 09:35:01.574049  189343 system_pods.go:89] "coredns-66bc5c9577-zpgms" [df9fb184-2e4c-40b9-8345-97ef36012e74] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:35:01.574056  189343 system_pods.go:89] "etcd-no-preload-505993" [0ac9ea1e-38b5-4626-a2dd-8cb0fa2d4b19] Running
	I1029 09:35:01.574063  189343 system_pods.go:89] "kindnet-9z7ks" [ecb0cb93-80cf-4699-8fe1-5da7367b2286] Running
	I1029 09:35:01.574070  189343 system_pods.go:89] "kube-apiserver-no-preload-505993" [e177136c-892b-494e-a2f3-1942b4df0f8a] Running
	I1029 09:35:01.574076  189343 system_pods.go:89] "kube-controller-manager-no-preload-505993" [8957f7ba-9959-45b8-936f-e59a75bb6c15] Running
	I1029 09:35:01.574080  189343 system_pods.go:89] "kube-proxy-r6974" [5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6] Running
	I1029 09:35:01.574084  189343 system_pods.go:89] "kube-scheduler-no-preload-505993" [20ad34f6-1eb1-4020-867a-7b1b8ac9dc6e] Running
	I1029 09:35:01.574090  189343 system_pods.go:89] "storage-provisioner" [3b3fca69-516e-44b0-b831-1a8196bfe62b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:35:01.574107  189343 retry.go:31] will retry after 305.460292ms: missing components: kube-dns
	I1029 09:35:01.884548  189343 system_pods.go:86] 8 kube-system pods found
	I1029 09:35:01.884584  189343 system_pods.go:89] "coredns-66bc5c9577-zpgms" [df9fb184-2e4c-40b9-8345-97ef36012e74] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:35:01.884593  189343 system_pods.go:89] "etcd-no-preload-505993" [0ac9ea1e-38b5-4626-a2dd-8cb0fa2d4b19] Running
	I1029 09:35:01.884600  189343 system_pods.go:89] "kindnet-9z7ks" [ecb0cb93-80cf-4699-8fe1-5da7367b2286] Running
	I1029 09:35:01.884605  189343 system_pods.go:89] "kube-apiserver-no-preload-505993" [e177136c-892b-494e-a2f3-1942b4df0f8a] Running
	I1029 09:35:01.884610  189343 system_pods.go:89] "kube-controller-manager-no-preload-505993" [8957f7ba-9959-45b8-936f-e59a75bb6c15] Running
	I1029 09:35:01.884614  189343 system_pods.go:89] "kube-proxy-r6974" [5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6] Running
	I1029 09:35:01.884618  189343 system_pods.go:89] "kube-scheduler-no-preload-505993" [20ad34f6-1eb1-4020-867a-7b1b8ac9dc6e] Running
	I1029 09:35:01.884624  189343 system_pods.go:89] "storage-provisioner" [3b3fca69-516e-44b0-b831-1a8196bfe62b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:35:01.884638  189343 retry.go:31] will retry after 346.91775ms: missing components: kube-dns
	I1029 09:35:02.235765  189343 system_pods.go:86] 8 kube-system pods found
	I1029 09:35:02.235798  189343 system_pods.go:89] "coredns-66bc5c9577-zpgms" [df9fb184-2e4c-40b9-8345-97ef36012e74] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:35:02.235805  189343 system_pods.go:89] "etcd-no-preload-505993" [0ac9ea1e-38b5-4626-a2dd-8cb0fa2d4b19] Running
	I1029 09:35:02.235813  189343 system_pods.go:89] "kindnet-9z7ks" [ecb0cb93-80cf-4699-8fe1-5da7367b2286] Running
	I1029 09:35:02.235818  189343 system_pods.go:89] "kube-apiserver-no-preload-505993" [e177136c-892b-494e-a2f3-1942b4df0f8a] Running
	I1029 09:35:02.235823  189343 system_pods.go:89] "kube-controller-manager-no-preload-505993" [8957f7ba-9959-45b8-936f-e59a75bb6c15] Running
	I1029 09:35:02.235827  189343 system_pods.go:89] "kube-proxy-r6974" [5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6] Running
	I1029 09:35:02.235831  189343 system_pods.go:89] "kube-scheduler-no-preload-505993" [20ad34f6-1eb1-4020-867a-7b1b8ac9dc6e] Running
	I1029 09:35:02.235838  189343 system_pods.go:89] "storage-provisioner" [3b3fca69-516e-44b0-b831-1a8196bfe62b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:35:02.235856  189343 retry.go:31] will retry after 409.098976ms: missing components: kube-dns
	I1029 09:35:02.649515  189343 system_pods.go:86] 8 kube-system pods found
	I1029 09:35:02.649549  189343 system_pods.go:89] "coredns-66bc5c9577-zpgms" [df9fb184-2e4c-40b9-8345-97ef36012e74] Running
	I1029 09:35:02.649557  189343 system_pods.go:89] "etcd-no-preload-505993" [0ac9ea1e-38b5-4626-a2dd-8cb0fa2d4b19] Running
	I1029 09:35:02.649562  189343 system_pods.go:89] "kindnet-9z7ks" [ecb0cb93-80cf-4699-8fe1-5da7367b2286] Running
	I1029 09:35:02.649566  189343 system_pods.go:89] "kube-apiserver-no-preload-505993" [e177136c-892b-494e-a2f3-1942b4df0f8a] Running
	I1029 09:35:02.649572  189343 system_pods.go:89] "kube-controller-manager-no-preload-505993" [8957f7ba-9959-45b8-936f-e59a75bb6c15] Running
	I1029 09:35:02.649576  189343 system_pods.go:89] "kube-proxy-r6974" [5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6] Running
	I1029 09:35:02.649581  189343 system_pods.go:89] "kube-scheduler-no-preload-505993" [20ad34f6-1eb1-4020-867a-7b1b8ac9dc6e] Running
	I1029 09:35:02.649585  189343 system_pods.go:89] "storage-provisioner" [3b3fca69-516e-44b0-b831-1a8196bfe62b] Running
	I1029 09:35:02.649593  189343 system_pods.go:126] duration metric: took 1.370019569s to wait for k8s-apps to be running ...
	I1029 09:35:02.649606  189343 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:35:02.649668  189343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:35:02.665549  189343 system_svc.go:56] duration metric: took 15.922567ms WaitForService to wait for kubelet
	I1029 09:35:02.665621  189343 kubeadm.go:587] duration metric: took 16.656845838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:35:02.665645  189343 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:35:02.668598  189343 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:35:02.668634  189343 node_conditions.go:123] node cpu capacity is 2
	I1029 09:35:02.668649  189343 node_conditions.go:105] duration metric: took 2.998311ms to run NodePressure ...
	I1029 09:35:02.668662  189343 start.go:242] waiting for startup goroutines ...
	I1029 09:35:02.668670  189343 start.go:247] waiting for cluster config update ...
	I1029 09:35:02.668683  189343 start.go:256] writing updated cluster config ...
	I1029 09:35:02.668985  189343 ssh_runner.go:195] Run: rm -f paused
	I1029 09:35:02.673642  189343 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:35:02.678756  189343 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zpgms" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:02.684815  189343 pod_ready.go:94] pod "coredns-66bc5c9577-zpgms" is "Ready"
	I1029 09:35:02.684846  189343 pod_ready.go:86] duration metric: took 6.061821ms for pod "coredns-66bc5c9577-zpgms" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:02.687518  189343 pod_ready.go:83] waiting for pod "etcd-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:02.692615  189343 pod_ready.go:94] pod "etcd-no-preload-505993" is "Ready"
	I1029 09:35:02.692644  189343 pod_ready.go:86] duration metric: took 5.094214ms for pod "etcd-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:02.695289  189343 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:02.700126  189343 pod_ready.go:94] pod "kube-apiserver-no-preload-505993" is "Ready"
	I1029 09:35:02.700153  189343 pod_ready.go:86] duration metric: took 4.793494ms for pod "kube-apiserver-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:02.702610  189343 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:03.079335  189343 pod_ready.go:94] pod "kube-controller-manager-no-preload-505993" is "Ready"
	I1029 09:35:03.079361  189343 pod_ready.go:86] duration metric: took 376.726696ms for pod "kube-controller-manager-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:03.279424  189343 pod_ready.go:83] waiting for pod "kube-proxy-r6974" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:03.679398  189343 pod_ready.go:94] pod "kube-proxy-r6974" is "Ready"
	I1029 09:35:03.679444  189343 pod_ready.go:86] duration metric: took 399.994156ms for pod "kube-proxy-r6974" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:03.879832  189343 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:04.278658  189343 pod_ready.go:94] pod "kube-scheduler-no-preload-505993" is "Ready"
	I1029 09:35:04.278685  189343 pod_ready.go:86] duration metric: took 398.826836ms for pod "kube-scheduler-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:35:04.278699  189343 pod_ready.go:40] duration metric: took 1.605023716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:35:04.337723  189343 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:35:04.341198  189343 out.go:179] * Done! kubectl is now configured to use "no-preload-505993" cluster and "default" namespace by default
	W1029 09:35:02.547510  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:35:05.046553  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:35:07.545877  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	W1029 09:35:10.047389  191189 node_ready.go:57] node "embed-certs-946178" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 29 09:35:01 no-preload-505993 crio[841]: time="2025-10-29T09:35:01.600234648Z" level=info msg="Created container edd7c8d2af7ceaada947ae51700913553f7a3f5ea8ee72dd56ad4ae3dadf9b39: kube-system/coredns-66bc5c9577-zpgms/coredns" id=8d69542c-33c6-46a9-89bf-e3b3689dfed1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:35:01 no-preload-505993 crio[841]: time="2025-10-29T09:35:01.601891811Z" level=info msg="Starting container: edd7c8d2af7ceaada947ae51700913553f7a3f5ea8ee72dd56ad4ae3dadf9b39" id=e3e53613-d974-4711-8574-8246b1f3d4cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:35:01 no-preload-505993 crio[841]: time="2025-10-29T09:35:01.606332087Z" level=info msg="Started container" PID=2507 containerID=edd7c8d2af7ceaada947ae51700913553f7a3f5ea8ee72dd56ad4ae3dadf9b39 description=kube-system/coredns-66bc5c9577-zpgms/coredns id=e3e53613-d974-4711-8574-8246b1f3d4cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=2fce9c4118534dab72f25d2fbbab048447d4542c451852f0683da7f716fad1f0
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.877166621Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7fcdcf7f-76af-4b28-8e12-f33dc7835fd6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.877241485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.882256922Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e UID:e30fb005-524d-4e90-8800-e6ce95927686 NetNS:/var/run/netns/105f2ae6-a572-42a9-b455-89a7b780326a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012bb90}] Aliases:map[]}"
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.882426507Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.893319614Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e UID:e30fb005-524d-4e90-8800-e6ce95927686 NetNS:/var/run/netns/105f2ae6-a572-42a9-b455-89a7b780326a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012bb90}] Aliases:map[]}"
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.893656052Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.897320617Z" level=info msg="Ran pod sandbox 25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e with infra container: default/busybox/POD" id=7fcdcf7f-76af-4b28-8e12-f33dc7835fd6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.898504816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=81137aea-1426-494b-99b8-41157ce5ecc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.89872579Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=81137aea-1426-494b-99b8-41157ce5ecc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.898833065Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=81137aea-1426-494b-99b8-41157ce5ecc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.899638783Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=014ea03b-d8d1-488c-bdb1-2b8255359512 name=/runtime.v1.ImageService/PullImage
	Oct 29 09:35:04 no-preload-505993 crio[841]: time="2025-10-29T09:35:04.9012866Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.869924354Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=014ea03b-d8d1-488c-bdb1-2b8255359512 name=/runtime.v1.ImageService/PullImage
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.870530694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=55a12251-e189-49b8-bf17-b51ce2fa013f name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.872185034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e06fd160-584b-447f-b11c-9046450075b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.879777182Z" level=info msg="Creating container: default/busybox/busybox" id=3b614469-17b1-4d18-a8bc-0f8a4d2f6099 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.879906283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.884878002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.885351204Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.901314082Z" level=info msg="Created container 8e6398911c0c01e1ba44d20088e45e0c8a8bd161ef4b4d66f41f4729599b94a3: default/busybox/busybox" id=3b614469-17b1-4d18-a8bc-0f8a4d2f6099 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.902337336Z" level=info msg="Starting container: 8e6398911c0c01e1ba44d20088e45e0c8a8bd161ef4b4d66f41f4729599b94a3" id=dcee81bf-445a-46c3-8e1b-c73aa7d38347 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:35:06 no-preload-505993 crio[841]: time="2025-10-29T09:35:06.904629057Z" level=info msg="Started container" PID=2559 containerID=8e6398911c0c01e1ba44d20088e45e0c8a8bd161ef4b4d66f41f4729599b94a3 description=default/busybox/busybox id=dcee81bf-445a-46c3-8e1b-c73aa7d38347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8e6398911c0c0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   25c238e75c80f       busybox                                     default
	edd7c8d2af7ce       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   2fce9c4118534       coredns-66bc5c9577-zpgms                    kube-system
	024fc38387e5e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   3e7e5eb7ba13c       storage-provisioner                         kube-system
	20fda1a2cf369       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   9ee763d3b63eb       kindnet-9z7ks                               kube-system
	c5acba1e10fc1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      25 seconds ago      Running             kube-proxy                0                   5fb62490e4422       kube-proxy-r6974                            kube-system
	077b1e2aeada2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   e3e2af04bd0ac       kube-controller-manager-no-preload-505993   kube-system
	d3855c2708aa0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   5ed30c261b731       etcd-no-preload-505993                      kube-system
	1330d55a4a9cf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   f6aa019dfb1f6       kube-apiserver-no-preload-505993            kube-system
	d740011c51ca9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   a376cd02d0f3e       kube-scheduler-no-preload-505993            kube-system
	
	
	==> coredns [edd7c8d2af7ceaada947ae51700913553f7a3f5ea8ee72dd56ad4ae3dadf9b39] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50610 - 33886 "HINFO IN 6021609639669634836.5362116174814411716. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006402501s
	
	
	==> describe nodes <==
	Name:               no-preload-505993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-505993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=no-preload-505993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_34_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:34:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-505993
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:35:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:35:12 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:35:12 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:35:12 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:35:12 +0000   Wed, 29 Oct 2025 09:35:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-505993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                389ba72a-ee76-4894-8bbe-d133735524b8
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-zpgms                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-505993                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-9z7ks                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-505993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-505993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-r6974                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-505993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node no-preload-505993 event: Registered Node no-preload-505993 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-505993 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d3855c2708aa0ac8b7fb9c0372cca1703cea6eca80859ee3781d25f230711df8] <==
	{"level":"warn","ts":"2025-10-29T09:34:35.406106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.493195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.499766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.524871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.542208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.560547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.583945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.601429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.668724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.674039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.692060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.720119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.731693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.767547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.783681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.802388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.822549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.840693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.865797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.885787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.917645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.946673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:35.981168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:36.037776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:36.245624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40350","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:14 up  1:17,  0 user,  load average: 4.05, 3.74, 2.80
	Linux no-preload-505993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20fda1a2cf3693d4d728f48002b37761898beda95a85262f8d0393ef4f0c97b5] <==
	I1029 09:34:50.553882       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:34:50.554162       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:34:50.554284       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:34:50.554306       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:34:50.554320       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:34:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:34:50.753068       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:34:50.844401       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:34:50.844519       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:34:50.845621       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:34:51.146336       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:34:51.146439       1 metrics.go:72] Registering metrics
	I1029 09:34:51.146544       1 controller.go:711] "Syncing nftables rules"
	I1029 09:35:00.760476       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:35:00.760539       1 main.go:301] handling current node
	I1029 09:35:10.754872       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:35:10.754908       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1330d55a4a9cf4512f00154f2582f08066274959fbfe8b2f3f0239487a799ff9] <==
	I1029 09:34:37.617979       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:34:37.629732       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:34:37.630127       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:34:37.630261       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:34:37.630279       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:34:37.630286       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:34:37.630292       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:34:38.149768       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:34:38.163310       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:34:38.164088       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:34:39.594120       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:34:39.934216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:34:40.195759       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:34:40.217182       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1029 09:34:40.218399       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:34:40.226731       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:34:40.260370       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:34:41.207837       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:34:41.224741       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:34:41.235412       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:34:46.181952       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:34:46.370568       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:34:46.399233       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:34:46.507880       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1029 09:35:12.695571       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:48966: use of closed network connection
	
	
	==> kube-controller-manager [077b1e2aeada2f0834a02795f86c54e9466cb1e5ed717c27108e2c8322ff7a2f] <==
	I1029 09:34:45.300579       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:34:45.300656       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:34:45.300814       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-505993"
	I1029 09:34:45.300896       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:34:45.301040       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:34:45.301098       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:34:45.301563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:34:45.303430       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:34:45.304182       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:34:45.304266       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:34:45.304299       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:34:45.301599       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:34:45.305186       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:34:45.305659       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:34:45.313753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:34:45.314236       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:34:45.318481       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:34:45.319444       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:34:45.319526       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:34:45.319550       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:34:45.319566       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:34:45.319573       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:34:45.332878       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-505993" podCIDRs=["10.244.0.0/24"]
	I1029 09:34:45.339591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:35:05.304181       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c5acba1e10fc1c81a5bc059642cdba2fdb551ae71bc9b756b3708ed354b9d896] <==
	I1029 09:34:48.460753       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:34:48.554032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:34:48.655149       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:34:48.655199       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:34:48.655269       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:34:48.737132       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:34:48.737185       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:34:48.750801       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:34:48.777788       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:34:48.777813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:34:48.784232       1 config.go:200] "Starting service config controller"
	I1029 09:34:48.786170       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:34:48.786221       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:34:48.786226       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:34:48.786239       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:34:48.786244       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:34:48.791389       1 config.go:309] "Starting node config controller"
	I1029 09:34:48.791408       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:34:48.791415       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:34:48.886776       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:34:48.886811       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:34:48.886810       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d740011c51ca9ccd755c8d4d5e9cdd28fc676e3102d46e9b26ff40e08be90eef] <==
	E1029 09:34:37.784630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:34:37.784674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:34:37.784732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:34:37.784766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:34:37.784799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:34:37.784832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:34:37.784969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:34:37.789393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:34:37.789506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:34:37.789559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:34:37.789612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:34:37.789703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:34:37.789712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:34:37.789758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:34:37.796776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:34:38.605308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:34:38.611681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:34:38.633177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:34:38.658132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:34:38.710831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:34:38.713880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:34:38.779874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:34:38.785710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:34:39.256612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1029 09:34:41.842841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: I1029 09:34:46.413298    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9cv\" (UniqueName: \"kubernetes.io/projected/5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6-kube-api-access-5m9cv\") pod \"kube-proxy-r6974\" (UID: \"5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6\") " pod="kube-system/kube-proxy-r6974"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: E1029 09:34:46.466153    2030 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-505993\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-505993' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: E1029 09:34:46.466241    2030 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-r6974\" is forbidden: User \"system:node:no-preload-505993\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-505993' and this object" podUID="5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6" pod="kube-system/kube-proxy-r6974"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: E1029 09:34:46.466435    2030 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-505993\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-505993' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: I1029 09:34:46.517292    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbpt\" (UniqueName: \"kubernetes.io/projected/ecb0cb93-80cf-4699-8fe1-5da7367b2286-kube-api-access-fzbpt\") pod \"kindnet-9z7ks\" (UID: \"ecb0cb93-80cf-4699-8fe1-5da7367b2286\") " pod="kube-system/kindnet-9z7ks"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: I1029 09:34:46.517374    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ecb0cb93-80cf-4699-8fe1-5da7367b2286-cni-cfg\") pod \"kindnet-9z7ks\" (UID: \"ecb0cb93-80cf-4699-8fe1-5da7367b2286\") " pod="kube-system/kindnet-9z7ks"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: I1029 09:34:46.517395    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecb0cb93-80cf-4699-8fe1-5da7367b2286-xtables-lock\") pod \"kindnet-9z7ks\" (UID: \"ecb0cb93-80cf-4699-8fe1-5da7367b2286\") " pod="kube-system/kindnet-9z7ks"
	Oct 29 09:34:46 no-preload-505993 kubelet[2030]: I1029 09:34:46.517412    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecb0cb93-80cf-4699-8fe1-5da7367b2286-lib-modules\") pod \"kindnet-9z7ks\" (UID: \"ecb0cb93-80cf-4699-8fe1-5da7367b2286\") " pod="kube-system/kindnet-9z7ks"
	Oct 29 09:34:47 no-preload-505993 kubelet[2030]: E1029 09:34:47.517796    2030 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:34:47 no-preload-505993 kubelet[2030]: E1029 09:34:47.517930    2030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6-kube-proxy podName:5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6 nodeName:}" failed. No retries permitted until 2025-10-29 09:34:48.017905709 +0000 UTC m=+6.976386685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6-kube-proxy") pod "kube-proxy-r6974" (UID: "5d6b2c51-96d2-46e7-99af-e6a8f38f8fc6") : failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:34:47 no-preload-505993 kubelet[2030]: I1029 09:34:47.542821    2030 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 09:34:48 no-preload-505993 kubelet[2030]: W1029 09:34:48.200230    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-5fb62490e44228b1975ec22fcd9140ed330f0332a7c18fd5f97030201034ea9c WatchSource:0}: Error finding container 5fb62490e44228b1975ec22fcd9140ed330f0332a7c18fd5f97030201034ea9c: Status 404 returned error can't find the container with id 5fb62490e44228b1975ec22fcd9140ed330f0332a7c18fd5f97030201034ea9c
	Oct 29 09:34:49 no-preload-505993 kubelet[2030]: I1029 09:34:49.364109    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r6974" podStartSLOduration=3.364088903 podStartE2EDuration="3.364088903s" podCreationTimestamp="2025-10-29 09:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:34:49.340623651 +0000 UTC m=+8.299104626" watchObservedRunningTime="2025-10-29 09:34:49.364088903 +0000 UTC m=+8.322569879"
	Oct 29 09:34:51 no-preload-505993 kubelet[2030]: I1029 09:34:51.983973    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9z7ks" podStartSLOduration=3.146762208 podStartE2EDuration="5.983944008s" podCreationTimestamp="2025-10-29 09:34:46 +0000 UTC" firstStartedPulling="2025-10-29 09:34:47.62265391 +0000 UTC m=+6.581134886" lastFinishedPulling="2025-10-29 09:34:50.45983571 +0000 UTC m=+9.418316686" observedRunningTime="2025-10-29 09:34:51.338600407 +0000 UTC m=+10.297081400" watchObservedRunningTime="2025-10-29 09:34:51.983944008 +0000 UTC m=+10.942424983"
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: I1029 09:35:01.151827    2030 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: I1029 09:35:01.237702    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvdw\" (UniqueName: \"kubernetes.io/projected/df9fb184-2e4c-40b9-8345-97ef36012e74-kube-api-access-nwvdw\") pod \"coredns-66bc5c9577-zpgms\" (UID: \"df9fb184-2e4c-40b9-8345-97ef36012e74\") " pod="kube-system/coredns-66bc5c9577-zpgms"
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: I1029 09:35:01.237761    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qrnb\" (UniqueName: \"kubernetes.io/projected/3b3fca69-516e-44b0-b831-1a8196bfe62b-kube-api-access-4qrnb\") pod \"storage-provisioner\" (UID: \"3b3fca69-516e-44b0-b831-1a8196bfe62b\") " pod="kube-system/storage-provisioner"
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: I1029 09:35:01.237785    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df9fb184-2e4c-40b9-8345-97ef36012e74-config-volume\") pod \"coredns-66bc5c9577-zpgms\" (UID: \"df9fb184-2e4c-40b9-8345-97ef36012e74\") " pod="kube-system/coredns-66bc5c9577-zpgms"
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: I1029 09:35:01.237804    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b3fca69-516e-44b0-b831-1a8196bfe62b-tmp\") pod \"storage-provisioner\" (UID: \"3b3fca69-516e-44b0-b831-1a8196bfe62b\") " pod="kube-system/storage-provisioner"
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: W1029 09:35:01.521345    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-3e7e5eb7ba13cdc417a99f0c9ac775cc16c179333ac48d0826f3ae26971bf867 WatchSource:0}: Error finding container 3e7e5eb7ba13cdc417a99f0c9ac775cc16c179333ac48d0826f3ae26971bf867: Status 404 returned error can't find the container with id 3e7e5eb7ba13cdc417a99f0c9ac775cc16c179333ac48d0826f3ae26971bf867
	Oct 29 09:35:01 no-preload-505993 kubelet[2030]: W1029 09:35:01.540858    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-2fce9c4118534dab72f25d2fbbab048447d4542c451852f0683da7f716fad1f0 WatchSource:0}: Error finding container 2fce9c4118534dab72f25d2fbbab048447d4542c451852f0683da7f716fad1f0: Status 404 returned error can't find the container with id 2fce9c4118534dab72f25d2fbbab048447d4542c451852f0683da7f716fad1f0
	Oct 29 09:35:02 no-preload-505993 kubelet[2030]: I1029 09:35:02.381074    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.3810543 podStartE2EDuration="15.3810543s" podCreationTimestamp="2025-10-29 09:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:35:02.362980724 +0000 UTC m=+21.321461708" watchObservedRunningTime="2025-10-29 09:35:02.3810543 +0000 UTC m=+21.339535275"
	Oct 29 09:35:04 no-preload-505993 kubelet[2030]: I1029 09:35:04.567700    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zpgms" podStartSLOduration=18.567682392000002 podStartE2EDuration="18.567682392s" podCreationTimestamp="2025-10-29 09:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:35:02.381980519 +0000 UTC m=+21.340461503" watchObservedRunningTime="2025-10-29 09:35:04.567682392 +0000 UTC m=+23.526163368"
	Oct 29 09:35:04 no-preload-505993 kubelet[2030]: I1029 09:35:04.663847    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r84q\" (UniqueName: \"kubernetes.io/projected/e30fb005-524d-4e90-8800-e6ce95927686-kube-api-access-9r84q\") pod \"busybox\" (UID: \"e30fb005-524d-4e90-8800-e6ce95927686\") " pod="default/busybox"
	Oct 29 09:35:04 no-preload-505993 kubelet[2030]: W1029 09:35:04.896532    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e WatchSource:0}: Error finding container 25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e: Status 404 returned error can't find the container with id 25c238e75c80f4977e107312f8cfef7ec7bb018e915db671be39a3cd525dfb7e
	
	
	==> storage-provisioner [024fc38387e5e5574542d9885e1cba5cedc59056ba856fc8ff75f11057813655] <==
	I1029 09:35:01.605714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:35:01.630716       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:35:01.630846       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:35:01.635059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:01.643146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:35:01.643375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:35:01.643624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-505993_b85589bf-91ab-41f7-bd51-74c7d3c1bfc5!
	W1029 09:35:01.646649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:35:01.649371       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbaa88f5-db56-42e2-b30d-ab8c0d14deb0", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-505993_b85589bf-91ab-41f7-bd51-74c7d3c1bfc5 became leader
	W1029 09:35:01.681139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:35:01.746755       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-505993_b85589bf-91ab-41f7-bd51-74c7d3c1bfc5!
	W1029 09:35:03.683923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:03.688700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:05.693442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:05.704236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:07.706813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:07.711093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:09.714270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:09.718817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:11.722475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:11.727204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:13.730579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:13.735874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-505993 -n no-preload-505993
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-505993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.65s)
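The EnableAddonWhileActive failures in this group appear to share one failure mode: "addons enable metrics-server" exits with status 11 before the addon is created, and the embed-certs run below shows the cause minikube reports (its paused-container check shells out to "sudo runc list -f json", which fails with "open /run/runc: no such file or directory"). A minimal sketch, assuming the no-preload node is still running, of repeating that check by hand; the binary path and profile name are taken from this report, the rest is ordinary minikube/runc CLI usage, not a fix:
	# Minimal sketch (assumption: the no-preload-505993 node is still up).
	# Re-run the same paused-container check that `addons enable` performs:
	out/minikube-linux-arm64 -p no-preload-505993 ssh -- sudo runc list -f json
	# If that fails with "open /run/runc: no such file or directory", check whether
	# the runc state directory exists on the node at all:
	out/minikube-linux-arm64 -p no-preload-505993 ssh -- ls -ld /run/runc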

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (303.17139ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:35:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-946178 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-946178 describe deploy/metrics-server -n kube-system: exit status 1 (110.876593ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-946178 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
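The assertion above checks that the metrics-server Deployment carries the image override passed via --images/--registries; since the enable command exited before anything was deployed, kubectl finds no Deployment to describe. A minimal sketch, assuming the addon does come up, of reading the value the test expects with plain kubectl (the jsonpath is a standard field path; the expected string is the one this test supplies):
	# Minimal sketch (assumption: the metrics-server Deployment exists, which it does not here).
	kubectl --context embed-certs-946178 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected by this test: fake.domain/registry.k8s.io/echoserver:1.4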
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-946178
helpers_test.go:243: (dbg) docker inspect embed-certs-946178:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691",
	        "Created": "2025-10-29T09:34:04.151290839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 192276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:34:04.225513845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/hostname",
	        "HostsPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/hosts",
	        "LogPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691-json.log",
	        "Name": "/embed-certs-946178",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-946178:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-946178",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691",
	                "LowerDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-946178",
	                "Source": "/var/lib/docker/volumes/embed-certs-946178/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-946178",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-946178",
	                "name.minikube.sigs.k8s.io": "embed-certs-946178",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ef72e920e8a0a1dfefee969e1044a126669d017f44dbff7053c5aca4c8d8c5b",
	            "SandboxKey": "/var/run/docker/netns/4ef72e920e8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-946178": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:80:86:0a:96:2e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58e14a6bd5919ac00c4f79c5de1533110411df785cd7d398ccc05d5f98f62442",
	                    "EndpointID": "1070b2a4c9acd9592efc811af1d4664af6219c0c4708d7dce4ba130a49ab1d7c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-946178",
	                        "b005fccf23a7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
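Most of what the post-mortem uses from the inspect dump above is the container state, the published host ports, and the node IP on the embed-certs-946178 network. A minimal sketch of pulling just those fields with docker's Go-template formatter instead of the full JSON (the field paths mirror the output above):
	# Minimal sketch: extract only the fields of interest from the container above.
	docker inspect -f '{{.State.Status}}' embed-certs-946178
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-946178
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-946178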
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-946178 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-946178 logs -n 25: (1.600873023s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-937200                                                                                                                                                                                                                              │ cilium-937200            │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:29 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:29 UTC │ 29 Oct 25 09:30 UTC │
	│ delete  │ -p force-systemd-env-116185                                                                                                                                                                                                                   │ force-systemd-env-116185 │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:30 UTC │
	│ start   │ -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:30 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ cert-options-699236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236      │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	│ stop    │ -p old-k8s-version-162751 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444   │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993        │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178       │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
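
Several rows above never recorded a completion time: the metrics-server addon enables and the pause against old-k8s-version-162751. The final no-preload-505993 start row is the run whose log follows under "Last Start". As a sketch, a manual retry of the pause step would look like the command below, assuming the profile were recreated first (the table shows it was deleted afterwards):

	out/minikube-linux-arm64 pause -p old-k8s-version-162751 --alsologtostderr -v=1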
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:35:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:35:27.646481  196421 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:35:27.646686  196421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:35:27.646716  196421 out.go:374] Setting ErrFile to fd 2...
	I1029 09:35:27.646735  196421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:35:27.647041  196421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:35:27.647524  196421 out.go:368] Setting JSON to false
	I1029 09:35:27.648585  196421 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4680,"bootTime":1761725848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:35:27.648688  196421 start.go:143] virtualization:  
	I1029 09:35:27.652014  196421 out.go:179] * [no-preload-505993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:35:27.654375  196421 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:35:27.654449  196421 notify.go:221] Checking for updates...
	I1029 09:35:27.660771  196421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:35:27.663835  196421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:27.666801  196421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:35:27.669649  196421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:35:27.672622  196421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:35:27.676013  196421 config.go:182] Loaded profile config "no-preload-505993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:27.676666  196421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:35:27.703289  196421 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:35:27.703402  196421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:35:27.771149  196421 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:35:27.761057243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:35:27.771264  196421 docker.go:319] overlay module found
	I1029 09:35:27.774366  196421 out.go:179] * Using the docker driver based on existing profile
	I1029 09:35:27.777301  196421 start.go:309] selected driver: docker
	I1029 09:35:27.777323  196421 start.go:930] validating driver "docker" against &{Name:no-preload-505993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-505993 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:27.777426  196421 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:35:27.778143  196421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:35:27.833986  196421 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:35:27.825013795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:35:27.834332  196421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:35:27.834374  196421 cni.go:84] Creating CNI manager for ""
	I1029 09:35:27.834431  196421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:35:27.834472  196421 start.go:353] cluster config:
	{Name:no-preload-505993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-505993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:27.839625  196421 out.go:179] * Starting "no-preload-505993" primary control-plane node in "no-preload-505993" cluster
	I1029 09:35:27.842677  196421 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:35:27.845735  196421 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:35:27.848653  196421 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:35:27.848758  196421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:35:27.848815  196421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/config.json ...
	I1029 09:35:27.849137  196421 cache.go:107] acquiring lock: {Name:mk03199f74fe9d37c7e8b48d1cb3739bfb8ac1e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849228  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1029 09:35:27.849242  196421 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 133.187µs
	I1029 09:35:27.849263  196421 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1029 09:35:27.849275  196421 cache.go:107] acquiring lock: {Name:mk4e4ace80ec8af79a0d54ae1ae6c0c5305e9d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849308  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1029 09:35:27.849314  196421 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 41.018µs
	I1029 09:35:27.849320  196421 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1029 09:35:27.849329  196421 cache.go:107] acquiring lock: {Name:mk6acecb4ffa4be7bea02b31944cdeaba33b9735 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849361  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1029 09:35:27.849370  196421 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.199µs
	I1029 09:35:27.849377  196421 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1029 09:35:27.849385  196421 cache.go:107] acquiring lock: {Name:mkc68020c68872b037c27758f6fa1ba5e1df822a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849417  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1029 09:35:27.849426  196421 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 41.338µs
	I1029 09:35:27.849432  196421 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1029 09:35:27.849440  196421 cache.go:107] acquiring lock: {Name:mk4537d1d11c198c7fc28398b890edb643968347 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849493  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1029 09:35:27.849504  196421 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 64.428µs
	I1029 09:35:27.849515  196421 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1029 09:35:27.849529  196421 cache.go:107] acquiring lock: {Name:mk07f8c1327d43f3084a93b6891b8d49b559d7a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849557  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1029 09:35:27.849565  196421 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 37.744µs
	I1029 09:35:27.849576  196421 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1029 09:35:27.849585  196421 cache.go:107] acquiring lock: {Name:mkfed00f57228149608809e8737422356901d74f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849615  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1029 09:35:27.849621  196421 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.546µs
	I1029 09:35:27.849635  196421 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1029 09:35:27.849654  196421 cache.go:107] acquiring lock: {Name:mk6b9397d9ef9275b3284aa33dc1ee6c845b9afe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.849688  196421 cache.go:115] /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1029 09:35:27.849701  196421 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 48.337µs
	I1029 09:35:27.849707  196421 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1029 09:35:27.849714  196421 cache.go:87] Successfully saved all images to host disk.
	I1029 09:35:27.872174  196421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:35:27.872195  196421 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:35:27.872208  196421 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:35:27.872229  196421 start.go:360] acquireMachinesLock for no-preload-505993: {Name:mk6e4c74a00a71d7c46936f4a8de665487843123 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:27.872280  196421 start.go:364] duration metric: took 35.693µs to acquireMachinesLock for "no-preload-505993"
	I1029 09:35:27.872299  196421 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:35:27.872304  196421 fix.go:54] fixHost starting: 
	I1029 09:35:27.872637  196421 cli_runner.go:164] Run: docker container inspect no-preload-505993 --format={{.State.Status}}
	I1029 09:35:27.889248  196421 fix.go:112] recreateIfNeeded on no-preload-505993: state=Stopped err=<nil>
	W1029 09:35:27.889290  196421 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:35:27.892665  196421 out.go:252] * Restarting existing docker container for "no-preload-505993" ...
	I1029 09:35:27.892748  196421 cli_runner.go:164] Run: docker start no-preload-505993
	I1029 09:35:28.201601  196421 cli_runner.go:164] Run: docker container inspect no-preload-505993 --format={{.State.Status}}
	I1029 09:35:28.229578  196421 kic.go:430] container "no-preload-505993" state is running.
	I1029 09:35:28.229956  196421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-505993
	I1029 09:35:28.266455  196421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/config.json ...
	I1029 09:35:28.266763  196421 machine.go:94] provisionDockerMachine start ...
	I1029 09:35:28.266835  196421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:35:28.293222  196421 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:28.293542  196421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1029 09:35:28.293557  196421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:35:28.294165  196421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:35:31.448367  196421 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-505993
	
	I1029 09:35:31.448391  196421 ubuntu.go:182] provisioning hostname "no-preload-505993"
	I1029 09:35:31.448454  196421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:35:31.467643  196421 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:31.467951  196421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1029 09:35:31.467962  196421 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-505993 && echo "no-preload-505993" | sudo tee /etc/hostname
	I1029 09:35:31.633775  196421 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-505993
	
	I1029 09:35:31.633849  196421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:35:31.653252  196421 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:31.653583  196421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1029 09:35:31.653604  196421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-505993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-505993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-505993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:35:31.804820  196421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:35:31.804859  196421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:35:31.804881  196421 ubuntu.go:190] setting up certificates
	I1029 09:35:31.804899  196421 provision.go:84] configureAuth start
	I1029 09:35:31.804960  196421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-505993
	I1029 09:35:31.822413  196421 provision.go:143] copyHostCerts
	I1029 09:35:31.822492  196421 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:35:31.822507  196421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:35:31.822588  196421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:35:31.822694  196421 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:35:31.822705  196421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:35:31.822736  196421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:35:31.822795  196421 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:35:31.822838  196421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:35:31.822871  196421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:35:31.822927  196421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.no-preload-505993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-505993]
	I1029 09:35:32.036711  196421 provision.go:177] copyRemoteCerts
	I1029 09:35:32.036780  196421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:35:32.036824  196421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:35:32.055669  196421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:35:32.164452  196421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:35:32.184804  196421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:35:32.209085  196421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:35:32.234090  196421 provision.go:87] duration metric: took 429.175393ms to configureAuth
	I1029 09:35:32.234120  196421 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:35:32.234315  196421 config.go:182] Loaded profile config "no-preload-505993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:32.234429  196421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:35:32.256158  196421 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:32.256473  196421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1029 09:35:32.256490  196421 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
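The excerpt above ends with minikube writing /etc/sysconfig/crio.minikube (injecting --insecure-registry 10.96.0.0/12) and restarting CRI-O over SSH. As a sketch, and assuming the no-preload-505993 container is still running, the result can be spot-checked from the host with:

	out/minikube-linux-arm64 -p no-preload-505993 ssh -- cat /etc/sysconfig/crio.minikube
	out/minikube-linux-arm64 -p no-preload-505993 ssh -- sudo systemctl is-active crio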
	
	==> CRI-O <==
	Oct 29 09:35:20 embed-certs-946178 crio[841]: time="2025-10-29T09:35:20.614743781Z" level=info msg="Created container 2dd1839b5de5447e312c84ba312728d68499993e70a798674074e8a4fd2acdd2: kube-system/coredns-66bc5c9577-fszff/coredns" id=a78a1074-6b85-4a2a-a03c-c0ed8b385228 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:35:20 embed-certs-946178 crio[841]: time="2025-10-29T09:35:20.615525737Z" level=info msg="Starting container: 2dd1839b5de5447e312c84ba312728d68499993e70a798674074e8a4fd2acdd2" id=a91a2ea6-51d9-46ba-b29e-f27fef03cc8c name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:35:20 embed-certs-946178 crio[841]: time="2025-10-29T09:35:20.617652089Z" level=info msg="Started container" PID=1714 containerID=2dd1839b5de5447e312c84ba312728d68499993e70a798674074e8a4fd2acdd2 description=kube-system/coredns-66bc5c9577-fszff/coredns id=a91a2ea6-51d9-46ba-b29e-f27fef03cc8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=37ce8ce31d2272979cd97cf76842900ca246e84040d0bd4d1fe72b2f49401fcc
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.50385595Z" level=info msg="Running pod sandbox: default/busybox/POD" id=79ced0db-0a44-46f2-b1bf-6c778f3f5edb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.503932406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.510762767Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e77a862623e99105ec1aa858bdea7e36975ff41ebc8de3f92c3249675865b44 UID:7fa0339a-3020-460c-8bb9-421556d3e0d5 NetNS:/var/run/netns/9cb759ab-5964-4358-a8fd-3da19fbc02ad Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000b04b0}] Aliases:map[]}"
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.510813565Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.519770566Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e77a862623e99105ec1aa858bdea7e36975ff41ebc8de3f92c3249675865b44 UID:7fa0339a-3020-460c-8bb9-421556d3e0d5 NetNS:/var/run/netns/9cb759ab-5964-4358-a8fd-3da19fbc02ad Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000b04b0}] Aliases:map[]}"
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.519920902Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.522971932Z" level=info msg="Ran pod sandbox 5e77a862623e99105ec1aa858bdea7e36975ff41ebc8de3f92c3249675865b44 with infra container: default/busybox/POD" id=79ced0db-0a44-46f2-b1bf-6c778f3f5edb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.524886615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e734ef47-22f3-4157-b5c2-34095bf1bd89 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.525229797Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e734ef47-22f3-4157-b5c2-34095bf1bd89 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.525368154Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e734ef47-22f3-4157-b5c2-34095bf1bd89 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.528359211Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=da8ae5b0-b03c-487c-af06-bcd2bed48fcb name=/runtime.v1.ImageService/PullImage
	Oct 29 09:35:23 embed-certs-946178 crio[841]: time="2025-10-29T09:35:23.533891518Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.728607845Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=da8ae5b0-b03c-487c-af06-bcd2bed48fcb name=/runtime.v1.ImageService/PullImage
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.729914186Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ecd9cbd9-9434-4a6f-bfe6-53cd9fd96f91 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.731704388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f1bc23f-6420-4eef-b4e2-51c34f9edaf0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.738624211Z" level=info msg="Creating container: default/busybox/busybox" id=f74feb76-d5ed-46a4-9aa3-d75daa50c002 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.738747001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.744486809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.745152752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.762230037Z" level=info msg="Created container 02e50a329c137c3e56d5da000fddc3ecc57c54186fabfbcd7f4438df9cfe99b9: default/busybox/busybox" id=f74feb76-d5ed-46a4-9aa3-d75daa50c002 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.764824086Z" level=info msg="Starting container: 02e50a329c137c3e56d5da000fddc3ecc57c54186fabfbcd7f4438df9cfe99b9" id=51dede1e-c3fa-4cf6-a7a9-46b589768557 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:35:25 embed-certs-946178 crio[841]: time="2025-10-29T09:35:25.76755709Z" level=info msg="Started container" PID=1767 containerID=02e50a329c137c3e56d5da000fddc3ecc57c54186fabfbcd7f4438df9cfe99b9 description=default/busybox/busybox id=51dede1e-c3fa-4cf6-a7a9-46b589768557 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e77a862623e99105ec1aa858bdea7e36975ff41ebc8de3f92c3249675865b44
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	02e50a329c137       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   5e77a862623e9       busybox                                      default
	2dd1839b5de54       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   37ce8ce31d227       coredns-66bc5c9577-fszff                     kube-system
	3e808ed41d777       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   7d8413fb020bc       storage-provisioner                          kube-system
	5d3ff4316c065       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   9079b4230df78       kindnet-8lf6r                                kube-system
	ab2dcffecf511       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   a0ac6914fc0ef       kube-proxy-8zwf2                             kube-system
	9f3a71fff14ba       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   7ef25890b0f18       etcd-embed-certs-946178                      kube-system
	93d7a3bd2c90a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e6d420bf0e563       kube-apiserver-embed-certs-946178            kube-system
	3b67413352d9a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   db9ffb2f70ba6       kube-controller-manager-embed-certs-946178   kube-system
	aa139a50ebe25       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   68aed7a3eebed       kube-scheduler-embed-certs-946178            kube-system
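
The listing above appears to be CRI-level container state collected from the embed-certs-946178 node. As a sketch, it can be regenerated directly with crictl over the node's shell:

	out/minikube-linux-arm64 -p embed-certs-946178 ssh -- sudo crictl ps -a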
	
	
	==> coredns [2dd1839b5de5447e312c84ba312728d68499993e70a798674074e8a4fd2acdd2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38318 - 65490 "HINFO IN 4806488176322167400.626953516520267387. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004583457s
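
The lone HINFO query with a random name looks like CoreDNS's startup loop-detection probe rather than workload traffic. As a sketch, in-cluster resolution can be exercised end to end by reusing the busybox pod created earlier in this cluster:

	out/minikube-linux-arm64 -p embed-certs-946178 kubectl -- exec busybox -- nslookup kubernetes.default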
	
	
	==> describe nodes <==
	Name:               embed-certs-946178
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-946178
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=embed-certs-946178
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_34_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:34:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-946178
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:35:20 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:35:20 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:35:20 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:35:20 +0000   Wed, 29 Oct 2025 09:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-946178
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                3602b941-fa8a-4d9a-9349-a96421b2f60b
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-fszff                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-946178                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-8lf6r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-946178             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-946178    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-8zwf2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-946178             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-946178 event: Registered Node embed-certs-946178 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-946178 status is now: NodeReady
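
This is the standard kubectl describe view of the embed-certs-946178 control-plane node. As a sketch, it can be regenerated against the live profile with:

	out/minikube-linux-arm64 -p embed-certs-946178 kubectl -- describe node embed-certs-946178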
	
	
	==> dmesg <==
	[Oct29 09:06] overlayfs: idmapped layers are currently not supported
	[Oct29 09:07] overlayfs: idmapped layers are currently not supported
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
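
The recurring "overlayfs: idmapped layers are currently not supported" entries coincide with container and profile starts on this 5.15 kernel and read as informational rather than fatal. As a sketch, just these entries can be pulled with wall-clock timestamps via:

	sudo dmesg --ctime | grep -i overlayfs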
	
	
	==> etcd [9f3a71fff14bac5f1f8da51cdb5d1720f3d540d8746be2df122a31ca01097a4b] <==
	{"level":"warn","ts":"2025-10-29T09:34:28.564386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.595412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.637030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.658826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.692998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.702962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.731748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.755724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.800045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.845579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.872438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.908829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.935499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.979567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:28.990411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.027792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.056436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.071240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.094830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.133652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.156064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.190886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.207826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.257493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:34:29.409803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55974","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:34 up  1:18,  0 user,  load average: 3.21, 3.56, 2.76
	Linux embed-certs-946178 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d3ff4316c065bbf7f7746ef6068a8c154ea8c01d43ef0d217c1c30bdb67a362] <==
	I1029 09:34:39.549437       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:34:39.549943       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:34:39.550170       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:34:39.550204       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:34:39.550227       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:34:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:34:39.748296       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:34:39.748699       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:34:39.748745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:34:39.751811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:35:09.749061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:35:09.751592       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:35:09.751609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:35:09.751886       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1029 09:35:11.349718       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:35:11.349784       1 metrics.go:72] Registering metrics
	I1029 09:35:11.349839       1 controller.go:711] "Syncing nftables rules"
	I1029 09:35:19.748448       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:35:19.748489       1 main.go:301] handling current node
	I1029 09:35:29.748546       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:35:29.748623       1 main.go:301] handling current node
	
	
	==> kube-apiserver [93d7a3bd2c90a43401ae77163a2dd7f237b9864799768f5adcaa22fea87eca38] <==
	I1029 09:34:30.622486       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:34:30.630568       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:34:30.669827       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:34:30.670056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:34:30.676026       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:34:30.711891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:34:30.711959       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:34:31.042977       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:34:31.073090       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:34:31.073121       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:34:32.258209       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:34:32.327278       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:34:32.429656       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:34:32.431265       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:34:32.454074       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:34:32.456134       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:34:32.462877       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:34:33.703233       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:34:33.744334       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:34:33.769119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:34:38.378554       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:34:38.603836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:34:38.609595       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:34:38.663434       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1029 09:35:32.354104       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:32892: use of closed network connection
	
	
	==> kube-controller-manager [3b67413352d9a8aeb8dc642a1fd58714cbf9e4a4e6902ff2670e1449f634df07] <==
	I1029 09:34:37.619697       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:34:37.621226       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:34:37.643727       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:34:37.643841       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:34:37.652255       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-946178" podCIDRs=["10.244.0.0/24"]
	I1029 09:34:37.652751       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:34:37.656557       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:34:37.666937       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:34:37.667288       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:34:37.667683       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:34:37.668402       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-946178"
	I1029 09:34:37.668571       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:34:37.674619       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:34:37.714884       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:34:37.714984       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:34:37.715016       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:34:37.717451       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:34:37.717583       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:34:37.718763       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:34:37.718873       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:34:37.725160       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:34:37.733198       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:34:37.733778       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:34:37.738018       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:35:22.762992       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab2dcffecf5117554ddd4ec0a014fa591247ea3e1caf1b2858e0a9e965daf285] <==
	I1029 09:34:39.536605       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:34:39.642178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:34:39.742870       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:34:39.742905       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:34:39.742988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:34:39.862985       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:34:39.863104       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:34:39.867807       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:34:39.868458       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:34:39.868478       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:34:39.869713       1 config.go:200] "Starting service config controller"
	I1029 09:34:39.869722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:34:39.869736       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:34:39.869741       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:34:39.869758       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:34:39.869761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:34:39.870370       1 config.go:309] "Starting node config controller"
	I1029 09:34:39.870377       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:34:39.870382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:34:39.970208       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:34:39.970306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:34:39.970333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [aa139a50ebe2565aa5b7227f0344eae59ad84ae9703c7b635c4f77875d4dd5a1] <==
	E1029 09:34:30.723884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:34:30.723933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:34:30.754320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:34:30.757153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:34:30.757300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:34:30.757393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:34:30.757501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:34:30.757599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:34:30.759439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:34:30.759518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:34:30.759603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:34:30.759658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:34:30.759713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:34:30.759772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:34:30.759897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:34:30.759957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:34:30.760029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:34:31.590882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:34:31.786640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:34:31.795658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:34:31.838077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:34:31.853448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:34:31.871122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:34:32.019749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1029 09:34:35.286316       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:34:35 embed-certs-946178 kubelet[1300]: I1029 09:34:35.172913    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-946178" podStartSLOduration=1.172894092 podStartE2EDuration="1.172894092s" podCreationTimestamp="2025-10-29 09:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:34:35.105658356 +0000 UTC m=+1.452425857" watchObservedRunningTime="2025-10-29 09:34:35.172894092 +0000 UTC m=+1.519661593"
	Oct 29 09:34:37 embed-certs-946178 kubelet[1300]: I1029 09:34:37.750665    1300 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 29 09:34:37 embed-certs-946178 kubelet[1300]: I1029 09:34:37.751925    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.676626    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2-kube-proxy\") pod \"kube-proxy-8zwf2\" (UID: \"3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2\") " pod="kube-system/kube-proxy-8zwf2"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.676676    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2-lib-modules\") pod \"kube-proxy-8zwf2\" (UID: \"3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2\") " pod="kube-system/kube-proxy-8zwf2"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.676699    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rr2\" (UniqueName: \"kubernetes.io/projected/3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2-kube-api-access-x6rr2\") pod \"kube-proxy-8zwf2\" (UID: \"3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2\") " pod="kube-system/kube-proxy-8zwf2"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.676723    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2-xtables-lock\") pod \"kube-proxy-8zwf2\" (UID: \"3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2\") " pod="kube-system/kube-proxy-8zwf2"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.780196    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67b8d2ab-954a-4f88-9ef0-fd96b500d79d-cni-cfg\") pod \"kindnet-8lf6r\" (UID: \"67b8d2ab-954a-4f88-9ef0-fd96b500d79d\") " pod="kube-system/kindnet-8lf6r"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.780272    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9p6\" (UniqueName: \"kubernetes.io/projected/67b8d2ab-954a-4f88-9ef0-fd96b500d79d-kube-api-access-tg9p6\") pod \"kindnet-8lf6r\" (UID: \"67b8d2ab-954a-4f88-9ef0-fd96b500d79d\") " pod="kube-system/kindnet-8lf6r"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.780295    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67b8d2ab-954a-4f88-9ef0-fd96b500d79d-xtables-lock\") pod \"kindnet-8lf6r\" (UID: \"67b8d2ab-954a-4f88-9ef0-fd96b500d79d\") " pod="kube-system/kindnet-8lf6r"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.780352    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67b8d2ab-954a-4f88-9ef0-fd96b500d79d-lib-modules\") pod \"kindnet-8lf6r\" (UID: \"67b8d2ab-954a-4f88-9ef0-fd96b500d79d\") " pod="kube-system/kindnet-8lf6r"
	Oct 29 09:34:38 embed-certs-946178 kubelet[1300]: I1029 09:34:38.876220    1300 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 09:34:39 embed-certs-946178 kubelet[1300]: W1029 09:34:39.187572    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/crio-a0ac6914fc0efdd4470fe6da3f315a409a817225839ccdbed0b75d262078b074 WatchSource:0}: Error finding container a0ac6914fc0efdd4470fe6da3f315a409a817225839ccdbed0b75d262078b074: Status 404 returned error can't find the container with id a0ac6914fc0efdd4470fe6da3f315a409a817225839ccdbed0b75d262078b074
	Oct 29 09:34:39 embed-certs-946178 kubelet[1300]: W1029 09:34:39.256787    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/crio-9079b4230df78c3e31f76c66aaebd169a58797b85ce8b583533295942097c114 WatchSource:0}: Error finding container 9079b4230df78c3e31f76c66aaebd169a58797b85ce8b583533295942097c114: Status 404 returned error can't find the container with id 9079b4230df78c3e31f76c66aaebd169a58797b85ce8b583533295942097c114
	Oct 29 09:34:40 embed-certs-946178 kubelet[1300]: I1029 09:34:40.268942    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8zwf2" podStartSLOduration=2.268923633 podStartE2EDuration="2.268923633s" podCreationTimestamp="2025-10-29 09:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:34:40.236095067 +0000 UTC m=+6.582862576" watchObservedRunningTime="2025-10-29 09:34:40.268923633 +0000 UTC m=+6.615691133"
	Oct 29 09:34:40 embed-certs-946178 kubelet[1300]: I1029 09:34:40.269073    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8lf6r" podStartSLOduration=2.269057106 podStartE2EDuration="2.269057106s" podCreationTimestamp="2025-10-29 09:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:34:40.268608372 +0000 UTC m=+6.615375890" watchObservedRunningTime="2025-10-29 09:34:40.269057106 +0000 UTC m=+6.615824607"
	Oct 29 09:35:20 embed-certs-946178 kubelet[1300]: I1029 09:35:20.174425    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:35:20 embed-certs-946178 kubelet[1300]: I1029 09:35:20.281080    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b2401761-29ab-456b-9542-f90d10c5c3dd-tmp\") pod \"storage-provisioner\" (UID: \"b2401761-29ab-456b-9542-f90d10c5c3dd\") " pod="kube-system/storage-provisioner"
	Oct 29 09:35:20 embed-certs-946178 kubelet[1300]: I1029 09:35:20.281369    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20eec5cd-ff72-435d-8bf3-186261f7029b-config-volume\") pod \"coredns-66bc5c9577-fszff\" (UID: \"20eec5cd-ff72-435d-8bf3-186261f7029b\") " pod="kube-system/coredns-66bc5c9577-fszff"
	Oct 29 09:35:20 embed-certs-946178 kubelet[1300]: I1029 09:35:20.281416    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffhf\" (UniqueName: \"kubernetes.io/projected/20eec5cd-ff72-435d-8bf3-186261f7029b-kube-api-access-2ffhf\") pod \"coredns-66bc5c9577-fszff\" (UID: \"20eec5cd-ff72-435d-8bf3-186261f7029b\") " pod="kube-system/coredns-66bc5c9577-fszff"
	Oct 29 09:35:20 embed-certs-946178 kubelet[1300]: I1029 09:35:20.281444    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnnd\" (UniqueName: \"kubernetes.io/projected/b2401761-29ab-456b-9542-f90d10c5c3dd-kube-api-access-pjnnd\") pod \"storage-provisioner\" (UID: \"b2401761-29ab-456b-9542-f90d10c5c3dd\") " pod="kube-system/storage-provisioner"
	Oct 29 09:35:20 embed-certs-946178 kubelet[1300]: W1029 09:35:20.562678    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/crio-37ce8ce31d2272979cd97cf76842900ca246e84040d0bd4d1fe72b2f49401fcc WatchSource:0}: Error finding container 37ce8ce31d2272979cd97cf76842900ca246e84040d0bd4d1fe72b2f49401fcc: Status 404 returned error can't find the container with id 37ce8ce31d2272979cd97cf76842900ca246e84040d0bd4d1fe72b2f49401fcc
	Oct 29 09:35:21 embed-certs-946178 kubelet[1300]: I1029 09:35:21.275672    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fszff" podStartSLOduration=43.275651688 podStartE2EDuration="43.275651688s" podCreationTimestamp="2025-10-29 09:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:35:21.259466735 +0000 UTC m=+47.606234244" watchObservedRunningTime="2025-10-29 09:35:21.275651688 +0000 UTC m=+47.622419189"
	Oct 29 09:35:21 embed-certs-946178 kubelet[1300]: I1029 09:35:21.276398    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.27638625 podStartE2EDuration="41.27638625s" podCreationTimestamp="2025-10-29 09:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:35:21.273734781 +0000 UTC m=+47.620502290" watchObservedRunningTime="2025-10-29 09:35:21.27638625 +0000 UTC m=+47.623153751"
	Oct 29 09:35:23 embed-certs-946178 kubelet[1300]: I1029 09:35:23.298603    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6kn\" (UniqueName: \"kubernetes.io/projected/7fa0339a-3020-460c-8bb9-421556d3e0d5-kube-api-access-5k6kn\") pod \"busybox\" (UID: \"7fa0339a-3020-460c-8bb9-421556d3e0d5\") " pod="default/busybox"
	
	
	==> storage-provisioner [3e808ed41d7775752c31c07f87b859d103dc14a41b7daa187ea10d42837e4592] <==
	I1029 09:35:20.607096       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:35:20.628114       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:35:20.628229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:35:20.632808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:20.640289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:35:20.640540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:35:20.641001       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7498ccd-b53a-40e1-924d-4377223b536f", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-946178_bcc1e47e-3aa4-4c19-a71a-9d6996e0bce2 became leader
	I1029 09:35:20.642995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-946178_bcc1e47e-3aa4-4c19-a71a-9d6996e0bce2!
	W1029 09:35:20.655543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:20.658909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:35:20.743162       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-946178_bcc1e47e-3aa4-4c19-a71a-9d6996e0bce2!
	W1029 09:35:22.661833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:22.668254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:24.671536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:24.678437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:26.681348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:26.688237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:28.692144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:28.702620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:30.705912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:30.710151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:32.713020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:32.718325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:34.723108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:35:34.735392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946178 -n embed-certs-946178
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-946178 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-505993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-505993 --alsologtostderr -v=1: exit status 80 (1.976817789s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-505993 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:36:32.420056  201454 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:36:32.420269  201454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:32.420300  201454 out.go:374] Setting ErrFile to fd 2...
	I1029 09:36:32.420353  201454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:32.420615  201454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:36:32.420903  201454 out.go:368] Setting JSON to false
	I1029 09:36:32.420956  201454 mustload.go:66] Loading cluster: no-preload-505993
	I1029 09:36:32.421356  201454 config.go:182] Loaded profile config "no-preload-505993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:36:32.421841  201454 cli_runner.go:164] Run: docker container inspect no-preload-505993 --format={{.State.Status}}
	I1029 09:36:32.441249  201454 host.go:66] Checking if "no-preload-505993" exists ...
	I1029 09:36:32.441590  201454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:36:32.505088  201454 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:36:32.495184834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:36:32.505802  201454 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-505993 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:36:32.509261  201454 out.go:179] * Pausing node no-preload-505993 ... 
	I1029 09:36:32.512969  201454 host.go:66] Checking if "no-preload-505993" exists ...
	I1029 09:36:32.513318  201454 ssh_runner.go:195] Run: systemctl --version
	I1029 09:36:32.513367  201454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-505993
	I1029 09:36:32.530990  201454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/no-preload-505993/id_rsa Username:docker}
	I1029 09:36:32.635070  201454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:32.654000  201454 pause.go:52] kubelet running: true
	I1029 09:36:32.654124  201454 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:36:32.931556  201454 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:36:32.931647  201454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:36:33.015498  201454 cri.go:89] found id: "6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e"
	I1029 09:36:33.015570  201454 cri.go:89] found id: "fb26471a716c99b053f80007a29cfb0be111d8091a8005d0e65f204374cad040"
	I1029 09:36:33.015592  201454 cri.go:89] found id: "3d1bab9263ded9097697406f9949289734bffab4265f224193d85be8901fec23"
	I1029 09:36:33.015614  201454 cri.go:89] found id: "2431d11a99c398651365afb64f2024dc94310b6991d7f607080b149d3ed50e0d"
	I1029 09:36:33.015649  201454 cri.go:89] found id: "121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763"
	I1029 09:36:33.015672  201454 cri.go:89] found id: "f23cc204350b0e724d2d7de7e25812962bbbce24b9a5a7e022bc727f6a80b18c"
	I1029 09:36:33.015691  201454 cri.go:89] found id: "d0806a4c4d5e1ea92918f9224d777f2c3e94d25f663aaf70d5e9b0de3f5f3797"
	I1029 09:36:33.015714  201454 cri.go:89] found id: "dde28c2b4cc40af65ac06f06ec71c70d2e4934a002e393f3f98a4ea31fa0d591"
	I1029 09:36:33.015734  201454 cri.go:89] found id: "de2e4bbcf70fb4e2b145da2e1eeeb3965129da682b985d428d7db1b5c139f9ac"
	I1029 09:36:33.015771  201454 cri.go:89] found id: "0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	I1029 09:36:33.015790  201454 cri.go:89] found id: "c16295e57713c6fa7f970f1681964ea937293af9f41f054bac638fc23c2a75e1"
	I1029 09:36:33.015814  201454 cri.go:89] found id: ""
	I1029 09:36:33.015898  201454 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:36:33.028403  201454 retry.go:31] will retry after 200.160065ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:33Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:36:33.228777  201454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:33.249345  201454 pause.go:52] kubelet running: false
	I1029 09:36:33.249446  201454 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:36:33.444748  201454 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:36:33.444905  201454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:36:33.535369  201454 cri.go:89] found id: "6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e"
	I1029 09:36:33.535398  201454 cri.go:89] found id: "fb26471a716c99b053f80007a29cfb0be111d8091a8005d0e65f204374cad040"
	I1029 09:36:33.535404  201454 cri.go:89] found id: "3d1bab9263ded9097697406f9949289734bffab4265f224193d85be8901fec23"
	I1029 09:36:33.535407  201454 cri.go:89] found id: "2431d11a99c398651365afb64f2024dc94310b6991d7f607080b149d3ed50e0d"
	I1029 09:36:33.535411  201454 cri.go:89] found id: "121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763"
	I1029 09:36:33.535414  201454 cri.go:89] found id: "f23cc204350b0e724d2d7de7e25812962bbbce24b9a5a7e022bc727f6a80b18c"
	I1029 09:36:33.535417  201454 cri.go:89] found id: "d0806a4c4d5e1ea92918f9224d777f2c3e94d25f663aaf70d5e9b0de3f5f3797"
	I1029 09:36:33.535420  201454 cri.go:89] found id: "dde28c2b4cc40af65ac06f06ec71c70d2e4934a002e393f3f98a4ea31fa0d591"
	I1029 09:36:33.535425  201454 cri.go:89] found id: "de2e4bbcf70fb4e2b145da2e1eeeb3965129da682b985d428d7db1b5c139f9ac"
	I1029 09:36:33.535437  201454 cri.go:89] found id: "0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	I1029 09:36:33.535443  201454 cri.go:89] found id: "c16295e57713c6fa7f970f1681964ea937293af9f41f054bac638fc23c2a75e1"
	I1029 09:36:33.535447  201454 cri.go:89] found id: ""
	I1029 09:36:33.535495  201454 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:36:33.547061  201454 retry.go:31] will retry after 464.632652ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:33Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:36:34.012535  201454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:34.026746  201454 pause.go:52] kubelet running: false
	I1029 09:36:34.026856  201454 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:36:34.221858  201454 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:36:34.221946  201454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:36:34.311410  201454 cri.go:89] found id: "6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e"
	I1029 09:36:34.311431  201454 cri.go:89] found id: "fb26471a716c99b053f80007a29cfb0be111d8091a8005d0e65f204374cad040"
	I1029 09:36:34.311436  201454 cri.go:89] found id: "3d1bab9263ded9097697406f9949289734bffab4265f224193d85be8901fec23"
	I1029 09:36:34.311440  201454 cri.go:89] found id: "2431d11a99c398651365afb64f2024dc94310b6991d7f607080b149d3ed50e0d"
	I1029 09:36:34.311443  201454 cri.go:89] found id: "121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763"
	I1029 09:36:34.311446  201454 cri.go:89] found id: "f23cc204350b0e724d2d7de7e25812962bbbce24b9a5a7e022bc727f6a80b18c"
	I1029 09:36:34.311449  201454 cri.go:89] found id: "d0806a4c4d5e1ea92918f9224d777f2c3e94d25f663aaf70d5e9b0de3f5f3797"
	I1029 09:36:34.311458  201454 cri.go:89] found id: "dde28c2b4cc40af65ac06f06ec71c70d2e4934a002e393f3f98a4ea31fa0d591"
	I1029 09:36:34.311461  201454 cri.go:89] found id: "de2e4bbcf70fb4e2b145da2e1eeeb3965129da682b985d428d7db1b5c139f9ac"
	I1029 09:36:34.311471  201454 cri.go:89] found id: "0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	I1029 09:36:34.311474  201454 cri.go:89] found id: "c16295e57713c6fa7f970f1681964ea937293af9f41f054bac638fc23c2a75e1"
	I1029 09:36:34.311477  201454 cri.go:89] found id: ""
	I1029 09:36:34.311535  201454 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:36:34.326689  201454 out.go:203] 
	W1029 09:36:34.329616  201454 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:36:34.329640  201454 out.go:285] * 
	* 
	W1029 09:36:34.335047  201454 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:36:34.338429  201454 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-505993 --alsologtostderr -v=1 failed: exit status 80
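A minimal sketch for re-running, by hand, the checks the pause path performed, using the same commands that appear in the stderr log above (the profile name no-preload-505993 comes from the test; the suggestion that crio keeps its OCI runtime state somewhere other than runc's default /run/runc directory is an assumption, not something the log confirms):

	# kubelet is disabled first, so later iterations report it as not running
	out/minikube-linux-arm64 -p no-preload-505993 ssh -- sudo systemctl is-active kubelet
	# list the kube-system containers the pause would act on (same crictl filter as in the log)
	out/minikube-linux-arm64 -p no-preload-505993 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the step that fails: runc's default state directory /run/runc is missing on this node
	out/minikube-linux-arm64 -p no-preload-505993 ssh -- sudo runc list -f json

Each `runc list` attempt in the log exits with status 1 and "open /run/runc: no such file or directory", which is what drives minikube's retries and the final GUEST_PAUSE error above.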
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-505993
helpers_test.go:243: (dbg) docker inspect no-preload-505993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a",
	        "Created": "2025-10-29T09:33:49.110598267Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:35:27.927885362Z",
	            "FinishedAt": "2025-10-29T09:35:27.076098991Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/hosts",
	        "LogPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a-json.log",
	        "Name": "/no-preload-505993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-505993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-505993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a",
	                "LowerDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-505993",
	                "Source": "/var/lib/docker/volumes/no-preload-505993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-505993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-505993",
	                "name.minikube.sigs.k8s.io": "no-preload-505993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f744d638eb39c80f84a212cff9e20b45e7a58976f72797151872ca156b059803",
	            "SandboxKey": "/var/run/docker/netns/f744d638eb39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-505993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:d2:6b:3e:0d:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3147a87e4d57838736bbe9648b553b17f7ec6f1da903b525594523d0b3c2da78",
	                    "EndpointID": "b0446aa5d2a9437adf45d0df5e8ea54d780f00d02d9d6e9809b4b6e1cdebaced",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-505993",
	                        "d63baf692038"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
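The inspect dump above is verbose; when reproducing this post-mortem by hand, the few fields the harness actually checks (container state, mapped host ports, cluster IP) can be queried directly with Docker format templates, in the same style as the cli_runner calls later in this log. Illustrative only; the container name is taken from the output above:

	docker inspect -f '{{.State.Status}}' no-preload-505993
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-505993
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-505993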
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993: exit status 2 (367.880105ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-505993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-505993 logs -n 25: (1.402324976s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236    │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236    │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	│ stop    │ -p old-k8s-version-162751 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:35:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:35:48.281778  199087 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:35:48.281977  199087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:35:48.282003  199087 out.go:374] Setting ErrFile to fd 2...
	I1029 09:35:48.282022  199087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:35:48.282303  199087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:35:48.282734  199087 out.go:368] Setting JSON to false
	I1029 09:35:48.283790  199087 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4700,"bootTime":1761725848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:35:48.283892  199087 start.go:143] virtualization:  
	I1029 09:35:48.288973  199087 out.go:179] * [embed-certs-946178] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:35:48.293745  199087 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:35:48.293906  199087 notify.go:221] Checking for updates...
	I1029 09:35:48.302162  199087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:35:48.305485  199087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:48.308778  199087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:35:48.311957  199087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:35:48.315211  199087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:35:48.318933  199087 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:48.319489  199087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:35:48.364168  199087 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:35:48.364274  199087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:35:48.452651  199087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:35:48.439883026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:35:48.452751  199087 docker.go:319] overlay module found
	I1029 09:35:48.456282  199087 out.go:179] * Using the docker driver based on existing profile
	I1029 09:35:48.459692  199087 start.go:309] selected driver: docker
	I1029 09:35:48.459710  199087 start.go:930] validating driver "docker" against &{Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:48.459827  199087 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:35:48.460579  199087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:35:48.543714  199087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:35:48.532766046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:35:48.544081  199087 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:35:48.544111  199087 cni.go:84] Creating CNI manager for ""
	I1029 09:35:48.544158  199087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:35:48.544187  199087 start.go:353] cluster config:
	{Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:48.550402  199087 out.go:179] * Starting "embed-certs-946178" primary control-plane node in "embed-certs-946178" cluster
	I1029 09:35:48.554554  199087 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:35:48.557776  199087 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:35:48.560875  199087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:35:48.560929  199087 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:35:48.560954  199087 cache.go:59] Caching tarball of preloaded images
	I1029 09:35:48.561040  199087 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:35:48.561049  199087 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:35:48.561164  199087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json ...
	I1029 09:35:48.561365  199087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:35:48.586413  199087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:35:48.586436  199087 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:35:48.586449  199087 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:35:48.586475  199087 start.go:360] acquireMachinesLock for embed-certs-946178: {Name:mk1c928a559dbc3bbce2e34d80593c51300c509b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:48.586533  199087 start.go:364] duration metric: took 36.595µs to acquireMachinesLock for "embed-certs-946178"
	I1029 09:35:48.586567  199087 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:35:48.586572  199087 fix.go:54] fixHost starting: 
	I1029 09:35:48.586812  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:48.606032  199087 fix.go:112] recreateIfNeeded on embed-certs-946178: state=Stopped err=<nil>
	W1029 09:35:48.606059  199087 fix.go:138] unexpected machine state, will restart: <nil>
	W1029 09:35:48.644637  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:35:51.135806  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:35:48.612236  199087 out.go:252] * Restarting existing docker container for "embed-certs-946178" ...
	I1029 09:35:48.612373  199087 cli_runner.go:164] Run: docker start embed-certs-946178
	I1029 09:35:48.940132  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:48.968416  199087 kic.go:430] container "embed-certs-946178" state is running.
	I1029 09:35:48.968795  199087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:35:49.001021  199087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json ...
	I1029 09:35:49.001286  199087 machine.go:94] provisionDockerMachine start ...
	I1029 09:35:49.001362  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:49.030299  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:49.030877  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:49.030894  199087 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:35:49.031629  199087 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:35:52.192612  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-946178
	
	I1029 09:35:52.192685  199087 ubuntu.go:182] provisioning hostname "embed-certs-946178"
	I1029 09:35:52.192778  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:52.219903  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:52.220212  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:52.220222  199087 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-946178 && echo "embed-certs-946178" | sudo tee /etc/hostname
	I1029 09:35:52.408282  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-946178
	
	I1029 09:35:52.408389  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:52.434163  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:52.434494  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:52.434511  199087 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-946178' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-946178/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-946178' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:35:52.601505  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:35:52.601561  199087 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:35:52.601594  199087 ubuntu.go:190] setting up certificates
	I1029 09:35:52.601617  199087 provision.go:84] configureAuth start
	I1029 09:35:52.601706  199087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:35:52.626923  199087 provision.go:143] copyHostCerts
	I1029 09:35:52.627008  199087 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:35:52.627031  199087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:35:52.627105  199087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:35:52.627216  199087 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:35:52.627229  199087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:35:52.627260  199087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:35:52.627331  199087 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:35:52.627341  199087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:35:52.627369  199087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:35:52.627537  199087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.embed-certs-946178 san=[127.0.0.1 192.168.85.2 embed-certs-946178 localhost minikube]
	I1029 09:35:53.811225  199087 provision.go:177] copyRemoteCerts
	I1029 09:35:53.811341  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:35:53.811423  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:53.830404  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:53.948578  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:35:53.982501  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:35:54.005434  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:35:54.029846  199087 provision.go:87] duration metric: took 1.42820267s to configureAuth
	I1029 09:35:54.029926  199087 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:35:54.030178  199087 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:54.030346  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.055802  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:54.056105  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:54.056120  199087 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:35:54.581632  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:35:54.581657  199087 machine.go:97] duration metric: took 5.580359663s to provisionDockerMachine
	I1029 09:35:54.581667  199087 start.go:293] postStartSetup for "embed-certs-946178" (driver="docker")
	I1029 09:35:54.581678  199087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:35:54.581738  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:35:54.581786  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.612249  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:54.745834  199087 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:35:54.757194  199087 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:35:54.757225  199087 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:35:54.757236  199087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:35:54.757287  199087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:35:54.757390  199087 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:35:54.757503  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:35:54.768962  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:35:54.793298  199087 start.go:296] duration metric: took 211.615976ms for postStartSetup
	I1029 09:35:54.793376  199087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:35:54.793421  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.813719  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:54.924786  199087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:35:54.932522  199087 fix.go:56] duration metric: took 6.345943092s for fixHost
	I1029 09:35:54.932547  199087 start.go:83] releasing machines lock for "embed-certs-946178", held for 6.345990591s
	I1029 09:35:54.932614  199087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:35:54.973583  199087 ssh_runner.go:195] Run: cat /version.json
	I1029 09:35:54.973641  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.973849  199087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:35:54.973913  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:55.019733  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:55.024988  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:55.144706  199087 ssh_runner.go:195] Run: systemctl --version
	I1029 09:35:55.254917  199087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:35:55.330970  199087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:35:55.336296  199087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:35:55.336378  199087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:35:55.344160  199087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:35:55.344184  199087 start.go:496] detecting cgroup driver to use...
	I1029 09:35:55.344215  199087 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:35:55.344262  199087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:35:55.365005  199087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:35:55.380728  199087 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:35:55.380843  199087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:35:55.398031  199087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:35:55.411982  199087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:35:55.605459  199087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:35:55.787029  199087 docker.go:234] disabling docker service ...
	I1029 09:35:55.787101  199087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:35:55.803082  199087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:35:55.825022  199087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:35:55.973930  199087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:35:56.157734  199087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:35:56.172992  199087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:35:56.188517  199087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:35:56.188590  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.198680  199087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:35:56.198743  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.208424  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.218024  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.228580  199087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:35:56.238298  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.251992  199087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.265334  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.276023  199087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:35:56.285474  199087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:35:56.294330  199087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:35:56.419802  199087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:35:56.690951  199087 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:35:56.691045  199087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:35:56.699947  199087 start.go:564] Will wait 60s for crictl version
	I1029 09:35:56.700062  199087 ssh_runner.go:195] Run: which crictl
	I1029 09:35:56.704151  199087 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:35:56.766117  199087 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:35:56.766216  199087 ssh_runner.go:195] Run: crio --version
	I1029 09:35:56.834470  199087 ssh_runner.go:195] Run: crio --version
	I1029 09:35:56.880606  199087 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1029 09:35:53.143547  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:35:55.645885  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:35:56.883623  199087 cli_runner.go:164] Run: docker network inspect embed-certs-946178 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:35:56.902479  199087 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:35:56.907902  199087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:35:56.929126  199087 kubeadm.go:884] updating cluster {Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:35:56.929298  199087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:35:56.929381  199087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:35:56.978907  199087 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:35:56.978928  199087 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:35:56.978985  199087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:35:57.023656  199087 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:35:57.023682  199087 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:35:57.023691  199087 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:35:57.023804  199087 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-946178 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:35:57.023884  199087 ssh_runner.go:195] Run: crio config
	I1029 09:35:57.077428  199087 cni.go:84] Creating CNI manager for ""
	I1029 09:35:57.077494  199087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:35:57.077528  199087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:35:57.077555  199087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-946178 NodeName:embed-certs-946178 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:35:57.077714  199087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-946178"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:35:57.077787  199087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:35:57.086234  199087 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:35:57.086355  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:35:57.094218  199087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1029 09:35:57.107464  199087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:35:57.120972  199087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1029 09:35:57.137259  199087 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:35:57.141362  199087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:35:57.151543  199087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:35:57.275401  199087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:35:57.291579  199087 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178 for IP: 192.168.85.2
	I1029 09:35:57.291662  199087 certs.go:195] generating shared ca certs ...
	I1029 09:35:57.291693  199087 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:57.291882  199087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:35:57.291952  199087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:35:57.291988  199087 certs.go:257] generating profile certs ...
	I1029 09:35:57.292114  199087 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.key
	I1029 09:35:57.292220  199087 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key.8f5fae26
	I1029 09:35:57.292285  199087 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key
	I1029 09:35:57.292459  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:35:57.292520  199087 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:35:57.292538  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:35:57.292579  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:35:57.292612  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:35:57.292652  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:35:57.292701  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:35:57.293248  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:35:57.315401  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:35:57.336596  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:35:57.357708  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:35:57.379108  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:35:57.405897  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:35:57.430451  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:35:57.452389  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:35:57.479026  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:35:57.508705  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:35:57.532174  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:35:57.552388  199087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:35:57.567511  199087 ssh_runner.go:195] Run: openssl version
	I1029 09:35:57.573992  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:35:57.582449  199087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:35:57.586402  199087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:35:57.586515  199087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:35:57.636129  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:35:57.644583  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:35:57.653468  199087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:35:57.658935  199087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:35:57.659005  199087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:35:57.700572  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:35:57.708600  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:35:57.720410  199087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:35:57.724400  199087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:35:57.724516  199087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:35:57.766355  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:35:57.774308  199087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:35:57.778039  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:35:57.819869  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:35:57.862392  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:35:57.904144  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:35:57.951415  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:35:58.020064  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 09:35:58.107163  199087 kubeadm.go:401] StartCluster: {Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:58.107259  199087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:35:58.107335  199087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:35:58.182751  199087 cri.go:89] found id: "8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15"
	I1029 09:35:58.182775  199087 cri.go:89] found id: "1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68"
	I1029 09:35:58.182781  199087 cri.go:89] found id: "0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365"
	I1029 09:35:58.182785  199087 cri.go:89] found id: "9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a"
	I1029 09:35:58.182797  199087 cri.go:89] found id: ""
	I1029 09:35:58.182848  199087 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:35:58.229289  199087 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:35:58.229385  199087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:35:58.250707  199087 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:35:58.250732  199087 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:35:58.250786  199087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:35:58.263076  199087 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:35:58.263655  199087 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-946178" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:58.263899  199087 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-946178" cluster setting kubeconfig missing "embed-certs-946178" context setting]
	I1029 09:35:58.264427  199087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:58.265778  199087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:35:58.282215  199087 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:35:58.282252  199087 kubeadm.go:602] duration metric: took 31.513344ms to restartPrimaryControlPlane
	I1029 09:35:58.282262  199087 kubeadm.go:403] duration metric: took 175.10849ms to StartCluster
	I1029 09:35:58.282277  199087 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:58.282356  199087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:58.283658  199087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:58.285827  199087 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:58.285897  199087 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:35:58.285943  199087 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:35:58.286001  199087 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-946178"
	I1029 09:35:58.286019  199087 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-946178"
	W1029 09:35:58.286032  199087 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:35:58.286052  199087 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:35:58.286520  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.287349  199087 addons.go:70] Setting dashboard=true in profile "embed-certs-946178"
	I1029 09:35:58.287376  199087 addons.go:239] Setting addon dashboard=true in "embed-certs-946178"
	W1029 09:35:58.287384  199087 addons.go:248] addon dashboard should already be in state true
	I1029 09:35:58.287417  199087 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:35:58.287868  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.290649  199087 addons.go:70] Setting default-storageclass=true in profile "embed-certs-946178"
	I1029 09:35:58.290689  199087 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-946178"
	I1029 09:35:58.291015  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.295704  199087 out.go:179] * Verifying Kubernetes components...
	I1029 09:35:58.302529  199087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:35:58.340300  199087 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:35:58.343788  199087 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:35:58.347803  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:35:58.347831  199087 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:35:58.347909  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:58.353151  199087 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1029 09:35:58.137205  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:00.139889  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:02.636720  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:35:58.356158  199087 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:35:58.356182  199087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:35:58.356249  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:58.365481  199087 addons.go:239] Setting addon default-storageclass=true in "embed-certs-946178"
	W1029 09:35:58.365516  199087 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:35:58.365541  199087 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:35:58.365964  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.409815  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:58.424508  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:58.426960  199087 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:35:58.426981  199087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:35:58.427081  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:58.466156  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:58.662597  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:35:58.662638  199087 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:35:58.685728  199087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:35:58.705440  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:35:58.705467  199087 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:35:58.713533  199087 node_ready.go:35] waiting up to 6m0s for node "embed-certs-946178" to be "Ready" ...
	I1029 09:35:58.718161  199087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:35:58.761569  199087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:35:58.782548  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:35:58.782575  199087 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:35:58.882167  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:35:58.882194  199087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:35:58.942090  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:35:58.942119  199087 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:35:58.966042  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:35:58.966067  199087 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:35:58.984574  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:35:58.984603  199087 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:35:59.013785  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:35:59.013825  199087 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:35:59.045809  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:35:59.045846  199087 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:35:59.074143  199087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:36:03.120944  199087 node_ready.go:49] node "embed-certs-946178" is "Ready"
	I1029 09:36:03.121028  199087 node_ready.go:38] duration metric: took 4.407436324s for node "embed-certs-946178" to be "Ready" ...
	I1029 09:36:03.121056  199087 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:36:03.121144  199087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:36:03.347717  199087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.62951775s)
	I1029 09:36:04.795395  199087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033777088s)
	I1029 09:36:04.795611  199087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.721434564s)
	I1029 09:36:04.795642  199087 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.674462794s)
	I1029 09:36:04.795838  199087 api_server.go:72] duration metric: took 6.509913302s to wait for apiserver process to appear ...
	I1029 09:36:04.795852  199087 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:36:04.795874  199087 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:36:04.799253  199087 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-946178 addons enable metrics-server
	
	I1029 09:36:04.802655  199087 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1029 09:36:04.637124  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:07.135687  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:36:04.805758  199087 addons.go:515] duration metric: took 6.519793037s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1029 09:36:04.814571  199087 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:36:04.816043  199087 api_server.go:141] control plane version: v1.34.1
	I1029 09:36:04.816094  199087 api_server.go:131] duration metric: took 20.234303ms to wait for apiserver health ...
	I1029 09:36:04.816117  199087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:36:04.821164  199087 system_pods.go:59] 8 kube-system pods found
	I1029 09:36:04.821242  199087 system_pods.go:61] "coredns-66bc5c9577-fszff" [20eec5cd-ff72-435d-8bf3-186261f7029b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:36:04.821270  199087 system_pods.go:61] "etcd-embed-certs-946178" [0d9dac68-e3a7-4602-b820-9b5f6d8a637c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:36:04.821309  199087 system_pods.go:61] "kindnet-8lf6r" [67b8d2ab-954a-4f88-9ef0-fd96b500d79d] Running
	I1029 09:36:04.821336  199087 system_pods.go:61] "kube-apiserver-embed-certs-946178" [e774ef45-3d15-4691-aeda-044539edf25c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:36:04.821361  199087 system_pods.go:61] "kube-controller-manager-embed-certs-946178" [a7cbe94f-cfdb-421f-a335-7e796ce50d35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:36:04.821389  199087 system_pods.go:61] "kube-proxy-8zwf2" [3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2] Running
	I1029 09:36:04.821420  199087 system_pods.go:61] "kube-scheduler-embed-certs-946178" [406787cb-5f66-4c15-9938-0f4ed33dab0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:36:04.821447  199087 system_pods.go:61] "storage-provisioner" [b2401761-29ab-456b-9542-f90d10c5c3dd] Running
	I1029 09:36:04.821480  199087 system_pods.go:74] duration metric: took 5.343957ms to wait for pod list to return data ...
	I1029 09:36:04.821503  199087 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:36:04.824124  199087 default_sa.go:45] found service account: "default"
	I1029 09:36:04.824188  199087 default_sa.go:55] duration metric: took 2.655211ms for default service account to be created ...
	I1029 09:36:04.824213  199087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:36:04.921125  199087 system_pods.go:86] 8 kube-system pods found
	I1029 09:36:04.921209  199087 system_pods.go:89] "coredns-66bc5c9577-fszff" [20eec5cd-ff72-435d-8bf3-186261f7029b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:36:04.921237  199087 system_pods.go:89] "etcd-embed-certs-946178" [0d9dac68-e3a7-4602-b820-9b5f6d8a637c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:36:04.921280  199087 system_pods.go:89] "kindnet-8lf6r" [67b8d2ab-954a-4f88-9ef0-fd96b500d79d] Running
	I1029 09:36:04.921311  199087 system_pods.go:89] "kube-apiserver-embed-certs-946178" [e774ef45-3d15-4691-aeda-044539edf25c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:36:04.921336  199087 system_pods.go:89] "kube-controller-manager-embed-certs-946178" [a7cbe94f-cfdb-421f-a335-7e796ce50d35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:36:04.921371  199087 system_pods.go:89] "kube-proxy-8zwf2" [3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2] Running
	I1029 09:36:04.921398  199087 system_pods.go:89] "kube-scheduler-embed-certs-946178" [406787cb-5f66-4c15-9938-0f4ed33dab0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:36:04.921421  199087 system_pods.go:89] "storage-provisioner" [b2401761-29ab-456b-9542-f90d10c5c3dd] Running
	I1029 09:36:04.921458  199087 system_pods.go:126] duration metric: took 97.226578ms to wait for k8s-apps to be running ...
	I1029 09:36:04.921485  199087 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:36:04.921571  199087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:04.935336  199087 system_svc.go:56] duration metric: took 13.830883ms WaitForService to wait for kubelet
	I1029 09:36:04.935422  199087 kubeadm.go:587] duration metric: took 6.649496346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:36:04.935457  199087 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:36:04.939101  199087 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:36:04.939180  199087 node_conditions.go:123] node cpu capacity is 2
	I1029 09:36:04.939224  199087 node_conditions.go:105] duration metric: took 3.732447ms to run NodePressure ...
	I1029 09:36:04.939253  199087 start.go:242] waiting for startup goroutines ...
	I1029 09:36:04.939288  199087 start.go:247] waiting for cluster config update ...
	I1029 09:36:04.939319  199087 start.go:256] writing updated cluster config ...
	I1029 09:36:04.939682  199087 ssh_runner.go:195] Run: rm -f paused
	I1029 09:36:04.943594  199087 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:36:04.947923  199087 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fszff" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:36:06.953672  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:09.136245  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:11.635803  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:09.457602  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:11.953306  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:13.640898  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:16.136675  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:13.954106  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:16.454112  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:18.136792  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:36:19.135297  196421 pod_ready.go:94] pod "coredns-66bc5c9577-zpgms" is "Ready"
	I1029 09:36:19.135385  196421 pod_ready.go:86] duration metric: took 36.504985007s for pod "coredns-66bc5c9577-zpgms" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.138203  196421 pod_ready.go:83] waiting for pod "etcd-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.142659  196421 pod_ready.go:94] pod "etcd-no-preload-505993" is "Ready"
	I1029 09:36:19.142688  196421 pod_ready.go:86] duration metric: took 4.454636ms for pod "etcd-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.145156  196421 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.149702  196421 pod_ready.go:94] pod "kube-apiserver-no-preload-505993" is "Ready"
	I1029 09:36:19.149729  196421 pod_ready.go:86] duration metric: took 4.510932ms for pod "kube-apiserver-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.151811  196421 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.334222  196421 pod_ready.go:94] pod "kube-controller-manager-no-preload-505993" is "Ready"
	I1029 09:36:19.334251  196421 pod_ready.go:86] duration metric: took 182.384156ms for pod "kube-controller-manager-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.533498  196421 pod_ready.go:83] waiting for pod "kube-proxy-r6974" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.933475  196421 pod_ready.go:94] pod "kube-proxy-r6974" is "Ready"
	I1029 09:36:19.933503  196421 pod_ready.go:86] duration metric: took 399.977082ms for pod "kube-proxy-r6974" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:20.133492  196421 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:20.534232  196421 pod_ready.go:94] pod "kube-scheduler-no-preload-505993" is "Ready"
	I1029 09:36:20.534263  196421 pod_ready.go:86] duration metric: took 400.741183ms for pod "kube-scheduler-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:20.534277  196421 pod_ready.go:40] duration metric: took 37.962294444s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:36:20.587461  196421 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:36:20.591367  196421 out.go:179] * Done! kubectl is now configured to use "no-preload-505993" cluster and "default" namespace by default
	W1029 09:36:18.455150  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:20.954006  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:22.954461  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:25.453318  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:27.456759  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:29.954045  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:32.455787  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.855494728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e387a0c-3ef4-44b9-9cf3-a30a3e2bf424 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.858022799Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=50fdb341-b88e-47b2-aa04-497efadcf7de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.858127662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.874809105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.875255173Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dcc5bf377091740883291190dc19162888d515d3c4b382e1b14e2e1c25b4ca2e/merged/etc/passwd: no such file or directory"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.875431609Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dcc5bf377091740883291190dc19162888d515d3c4b382e1b14e2e1c25b4ca2e/merged/etc/group: no such file or directory"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.877097954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.915898082Z" level=info msg="Created container 6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e: kube-system/storage-provisioner/storage-provisioner" id=50fdb341-b88e-47b2-aa04-497efadcf7de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.917290095Z" level=info msg="Starting container: 6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e" id=ff59f343-988c-4e11-98e4-5bb90c431342 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.923858202Z" level=info msg="Started container" PID=1648 containerID=6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e description=kube-system/storage-provisioner/storage-provisioner id=ff59f343-988c-4e11-98e4-5bb90c431342 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09d87b325636520084586d3b547e5bd967104258c89fac6114e699ad20c7b6d6
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.256829168Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.26120556Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.261365471Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.261398497Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.264808997Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.264845945Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.264875967Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.268743087Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.268794066Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.268816663Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.271884817Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.271918409Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.27194131Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.28049908Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.280624693Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6c858fe9343d5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago      Running             storage-provisioner         2                   09d87b3256365       storage-provisioner                          kube-system
	0b803e843ffd6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   5d3b41bce9d55       dashboard-metrics-scraper-6ffb444bf9-grzt8   kubernetes-dashboard
	c16295e57713c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   9ab79a9bb5be3       kubernetes-dashboard-855c9754f9-985l5        kubernetes-dashboard
	5cbe8ea853be2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   e9811abb55348       busybox                                      default
	fb26471a716c9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   5176d832bb8a9       coredns-66bc5c9577-zpgms                     kube-system
	3d1bab9263ded       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   cc67ad8e252bf       kindnet-9z7ks                                kube-system
	2431d11a99c39       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   d307942825f7b       kube-proxy-r6974                             kube-system
	121378d1386ec       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago      Exited              storage-provisioner         1                   09d87b3256365       storage-provisioner                          kube-system
	f23cc204350b0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   b6f6e07d6e847       kube-controller-manager-no-preload-505993    kube-system
	d0806a4c4d5e1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   b372655fcba27       kube-apiserver-no-preload-505993             kube-system
	dde28c2b4cc40       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   7aabf0a2c3ae8       etcd-no-preload-505993                       kube-system
	de2e4bbcf70fb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   33b0528c46dd8       kube-scheduler-no-preload-505993             kube-system
	
	
	==> coredns [fb26471a716c99b053f80007a29cfb0be111d8091a8005d0e65f204374cad040] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36159 - 36356 "HINFO IN 3767339234654625109.1964683783820827812. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077778462s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-505993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-505993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=no-preload-505993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_34_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:34:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-505993
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:36:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:35:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-505993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                389ba72a-ee76-4894-8bbe-d133735524b8
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-zpgms                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-no-preload-505993                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-9z7ks                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-505993              250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-505993     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-r6974                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-505993              100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-grzt8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-985l5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 53s                  kube-proxy       
	  Normal   Starting                 2m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    114s                 kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  114s                 kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     114s                 kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           110s                 node-controller  Node no-preload-505993 event: Registered Node no-preload-505993 in Controller
	  Normal   NodeReady                94s                  kubelet          Node no-preload-505993 status is now: NodeReady
	  Normal   Starting                 60s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node no-preload-505993 event: Registered Node no-preload-505993 in Controller
	
	
	==> dmesg <==
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dde28c2b4cc40af65ac06f06ec71c70d2e4934a002e393f3f98a4ea31fa0d591] <==
	{"level":"warn","ts":"2025-10-29T09:35:39.645412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.696046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.721043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.752869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.788240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.820614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.848543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.864564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.891618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.940914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.963592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.977755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.006899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.017315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.033453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.051205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.065115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.085381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.098933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.149136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.164667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.189462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.206661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.222067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.274867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:35 up  1:19,  0 user,  load average: 4.06, 3.80, 2.90
	Linux no-preload-505993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3d1bab9263ded9097697406f9949289734bffab4265f224193d85be8901fec23] <==
	I1029 09:35:42.061847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:35:42.064473       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:35:42.064715       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:35:42.064761       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:35:42.064808       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:35:42.255051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:35:42.255154       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:35:42.255191       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:35:42.256134       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:36:12.255358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:36:12.255668       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:36:12.256936       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:36:12.257212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1029 09:36:13.255548       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:36:13.255584       1 metrics.go:72] Registering metrics
	I1029 09:36:13.255653       1 controller.go:711] "Syncing nftables rules"
	I1029 09:36:22.255864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:36:22.256542       1 main.go:301] handling current node
	I1029 09:36:32.263682       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:36:32.263714       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0806a4c4d5e1ea92918f9224d777f2c3e94d25f663aaf70d5e9b0de3f5f3797] <==
	I1029 09:35:41.272836       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:35:41.272877       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:35:41.294695       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:35:41.299123       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:35:41.299247       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:35:41.299282       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:35:41.299294       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:35:41.299553       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:35:41.299591       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:35:41.300608       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:35:41.300635       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:35:41.300642       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:35:41.300650       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:35:41.352259       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:35:41.537784       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:35:41.879626       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:35:41.895064       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:35:42.070561       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:35:42.135408       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:35:42.162323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:35:42.297110       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.187.234"}
	I1029 09:35:42.321333       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.111.179"}
	I1029 09:35:44.897527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:35:44.946250       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:35:45.066265       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f23cc204350b0e724d2d7de7e25812962bbbce24b9a5a7e022bc727f6a80b18c] <==
	I1029 09:35:44.540779       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:35:44.540791       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:35:44.542977       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:35:44.548974       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:35:44.554624       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:35:44.559869       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:35:44.562002       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:35:44.565243       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:35:44.573491       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:35:44.586867       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:35:44.586937       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:35:44.586970       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:35:44.586975       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:35:44.586980       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:35:44.589993       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:35:44.590056       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:35:44.590101       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:35:44.590125       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:35:44.590170       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:35:44.590181       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:35:44.590188       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:35:44.590253       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:35:44.590767       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:35:44.590867       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:35:44.593752       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [2431d11a99c398651365afb64f2024dc94310b6991d7f607080b149d3ed50e0d] <==
	I1029 09:35:42.283466       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:35:42.460817       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:35:42.562177       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:35:42.565422       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:35:42.565546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:35:42.591831       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:35:42.591881       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:35:42.596151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:35:42.596741       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:35:42.596760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:35:42.597861       1 config.go:200] "Starting service config controller"
	I1029 09:35:42.597880       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:35:42.602034       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:35:42.602056       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:35:42.602075       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:35:42.602082       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:35:42.602483       1 config.go:309] "Starting node config controller"
	I1029 09:35:42.602500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:35:42.602507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:35:42.698252       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:35:42.702577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:35:42.702594       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de2e4bbcf70fb4e2b145da2e1eeeb3965129da682b985d428d7db1b5c139f9ac] <==
	I1029 09:35:39.289694       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:35:41.032553       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:35:41.032582       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:35:41.032592       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:35:41.032599       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:35:41.275745       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:35:41.275776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:35:41.283560       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:35:41.284508       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:35:41.292819       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:35:41.284529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:35:41.395387       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:35:45 no-preload-505993 kubelet[769]: I1029 09:35:45.249086     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxbg6\" (UniqueName: \"kubernetes.io/projected/af3605e2-60e5-49fb-9b85-109d52e037a5-kube-api-access-bxbg6\") pod \"kubernetes-dashboard-855c9754f9-985l5\" (UID: \"af3605e2-60e5-49fb-9b85-109d52e037a5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-985l5"
	Oct 29 09:35:45 no-preload-505993 kubelet[769]: W1029 09:35:45.466822     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-5d3b41bce9d55629374e1a7f8875c385ea95ae964339ce04467cae01aede085b WatchSource:0}: Error finding container 5d3b41bce9d55629374e1a7f8875c385ea95ae964339ce04467cae01aede085b: Status 404 returned error can't find the container with id 5d3b41bce9d55629374e1a7f8875c385ea95ae964339ce04467cae01aede085b
	Oct 29 09:35:45 no-preload-505993 kubelet[769]: W1029 09:35:45.471481     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-9ab79a9bb5be3c15418d1287920ab13e4eb3e1a89cf50289029f058aff23f292 WatchSource:0}: Error finding container 9ab79a9bb5be3c15418d1287920ab13e4eb3e1a89cf50289029f058aff23f292: Status 404 returned error can't find the container with id 9ab79a9bb5be3c15418d1287920ab13e4eb3e1a89cf50289029f058aff23f292
	Oct 29 09:35:48 no-preload-505993 kubelet[769]: I1029 09:35:48.929989     769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:35:50 no-preload-505993 kubelet[769]: I1029 09:35:50.757768     769 scope.go:117] "RemoveContainer" containerID="d48d75c152158fbcddb817be10dbf9a9d065fd9b291a4965cf18e2ca9c565796"
	Oct 29 09:35:51 no-preload-505993 kubelet[769]: I1029 09:35:51.761622     769 scope.go:117] "RemoveContainer" containerID="d48d75c152158fbcddb817be10dbf9a9d065fd9b291a4965cf18e2ca9c565796"
	Oct 29 09:35:51 no-preload-505993 kubelet[769]: I1029 09:35:51.761900     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:35:51 no-preload-505993 kubelet[769]: E1029 09:35:51.762034     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:35:52 no-preload-505993 kubelet[769]: I1029 09:35:52.784753     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:35:52 no-preload-505993 kubelet[769]: E1029 09:35:52.797890     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:35:55 no-preload-505993 kubelet[769]: I1029 09:35:55.437585     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:35:55 no-preload-505993 kubelet[769]: E1029 09:35:55.437756     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.608912     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.835937     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.836264     769 scope.go:117] "RemoveContainer" containerID="0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: E1029 09:36:09.836443     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.866295     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-985l5" podStartSLOduration=13.894118854 podStartE2EDuration="24.866278043s" podCreationTimestamp="2025-10-29 09:35:45 +0000 UTC" firstStartedPulling="2025-10-29 09:35:45.477659888 +0000 UTC m=+10.224318348" lastFinishedPulling="2025-10-29 09:35:56.449819077 +0000 UTC m=+21.196477537" observedRunningTime="2025-10-29 09:35:56.825510364 +0000 UTC m=+21.572168840" watchObservedRunningTime="2025-10-29 09:36:09.866278043 +0000 UTC m=+34.612936520"
	Oct 29 09:36:12 no-preload-505993 kubelet[769]: I1029 09:36:12.847233     769 scope.go:117] "RemoveContainer" containerID="121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763"
	Oct 29 09:36:15 no-preload-505993 kubelet[769]: I1029 09:36:15.437394     769 scope.go:117] "RemoveContainer" containerID="0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	Oct 29 09:36:15 no-preload-505993 kubelet[769]: E1029 09:36:15.438239     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:26 no-preload-505993 kubelet[769]: I1029 09:36:26.609093     769 scope.go:117] "RemoveContainer" containerID="0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	Oct 29 09:36:26 no-preload-505993 kubelet[769]: E1029 09:36:26.609286     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:32 no-preload-505993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:36:32 no-preload-505993 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:36:32 no-preload-505993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c16295e57713c6fa7f970f1681964ea937293af9f41f054bac638fc23c2a75e1] <==
	2025/10/29 09:35:56 Using namespace: kubernetes-dashboard
	2025/10/29 09:35:56 Using in-cluster config to connect to apiserver
	2025/10/29 09:35:56 Using secret token for csrf signing
	2025/10/29 09:35:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:35:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:35:56 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:35:56 Generating JWE encryption key
	2025/10/29 09:35:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:35:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:35:56 Initializing JWE encryption key from synchronized object
	2025/10/29 09:35:57 Creating in-cluster Sidecar client
	2025/10/29 09:35:57 Serving insecurely on HTTP port: 9090
	2025/10/29 09:35:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:36:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:35:56 Starting overwatch
	
	
	==> storage-provisioner [121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763] <==
	I1029 09:35:41.913699       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:36:11.921317       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e] <==
	I1029 09:36:12.940287       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:36:12.962311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:36:12.962373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:36:12.968600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:16.424794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:20.685567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:24.284568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:27.338522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:30.360247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:30.367471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:30.367663       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:36:30.367879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-505993_fca96da2-5e48-4fa9-a734-07ed5f520709!
	W1029 09:36:30.377961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:30.368082       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbaa88f5-db56-42e2-b30d-ab8c0d14deb0", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-505993_fca96da2-5e48-4fa9-a734-07ed5f520709 became leader
	W1029 09:36:30.380933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:30.468516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-505993_fca96da2-5e48-4fa9-a734-07ed5f520709!
	W1029 09:36:32.387362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:32.399210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:34.403174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:34.408488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-505993 -n no-preload-505993
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-505993 -n no-preload-505993: exit status 2 (385.503827ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-505993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-505993
helpers_test.go:243: (dbg) docker inspect no-preload-505993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a",
	        "Created": "2025-10-29T09:33:49.110598267Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:35:27.927885362Z",
	            "FinishedAt": "2025-10-29T09:35:27.076098991Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/hosts",
	        "LogPath": "/var/lib/docker/containers/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a-json.log",
	        "Name": "/no-preload-505993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-505993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-505993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a",
	                "LowerDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0823108135d7c7891d0d8e0e0ee4954f318020c6f85c95a7b1c176cc8aeeabf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-505993",
	                "Source": "/var/lib/docker/volumes/no-preload-505993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-505993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-505993",
	                "name.minikube.sigs.k8s.io": "no-preload-505993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f744d638eb39c80f84a212cff9e20b45e7a58976f72797151872ca156b059803",
	            "SandboxKey": "/var/run/docker/netns/f744d638eb39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-505993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:d2:6b:3e:0d:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3147a87e4d57838736bbe9648b553b17f7ec6f1da903b525594523d0b3c2da78",
	                    "EndpointID": "b0446aa5d2a9437adf45d0df5e8ea54d780f00d02d9d6e9809b4b6e1cdebaced",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-505993",
	                        "d63baf692038"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993: exit status 2 (366.089378ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-505993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-505993 logs -n 25: (1.304305547s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-699236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-699236    │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ delete  │ -p cert-options-699236                                                                                                                                                                                                                        │ cert-options-699236    │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:31 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │                     │
	│ stop    │ -p old-k8s-version-162751 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:32 UTC │
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444 │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178     │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993      │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:35:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:35:48.281778  199087 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:35:48.281977  199087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:35:48.282003  199087 out.go:374] Setting ErrFile to fd 2...
	I1029 09:35:48.282022  199087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:35:48.282303  199087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:35:48.282734  199087 out.go:368] Setting JSON to false
	I1029 09:35:48.283790  199087 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4700,"bootTime":1761725848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:35:48.283892  199087 start.go:143] virtualization:  
	I1029 09:35:48.288973  199087 out.go:179] * [embed-certs-946178] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:35:48.293745  199087 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:35:48.293906  199087 notify.go:221] Checking for updates...
	I1029 09:35:48.302162  199087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:35:48.305485  199087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:48.308778  199087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:35:48.311957  199087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:35:48.315211  199087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:35:48.318933  199087 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:48.319489  199087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:35:48.364168  199087 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:35:48.364274  199087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:35:48.452651  199087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:35:48.439883026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:35:48.452751  199087 docker.go:319] overlay module found
	I1029 09:35:48.456282  199087 out.go:179] * Using the docker driver based on existing profile
	I1029 09:35:48.459692  199087 start.go:309] selected driver: docker
	I1029 09:35:48.459710  199087 start.go:930] validating driver "docker" against &{Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:48.459827  199087 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:35:48.460579  199087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:35:48.543714  199087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:35:48.532766046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:35:48.544081  199087 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:35:48.544111  199087 cni.go:84] Creating CNI manager for ""
	I1029 09:35:48.544158  199087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:35:48.544187  199087 start.go:353] cluster config:
	{Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:48.550402  199087 out.go:179] * Starting "embed-certs-946178" primary control-plane node in "embed-certs-946178" cluster
	I1029 09:35:48.554554  199087 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:35:48.557776  199087 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:35:48.560875  199087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:35:48.560929  199087 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:35:48.560954  199087 cache.go:59] Caching tarball of preloaded images
	I1029 09:35:48.561040  199087 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:35:48.561049  199087 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:35:48.561164  199087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json ...
	I1029 09:35:48.561365  199087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:35:48.586413  199087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:35:48.586436  199087 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:35:48.586449  199087 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:35:48.586475  199087 start.go:360] acquireMachinesLock for embed-certs-946178: {Name:mk1c928a559dbc3bbce2e34d80593c51300c509b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:35:48.586533  199087 start.go:364] duration metric: took 36.595µs to acquireMachinesLock for "embed-certs-946178"
	I1029 09:35:48.586567  199087 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:35:48.586572  199087 fix.go:54] fixHost starting: 
	I1029 09:35:48.586812  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:48.606032  199087 fix.go:112] recreateIfNeeded on embed-certs-946178: state=Stopped err=<nil>
	W1029 09:35:48.606059  199087 fix.go:138] unexpected machine state, will restart: <nil>
	W1029 09:35:48.644637  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:35:51.135806  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:35:48.612236  199087 out.go:252] * Restarting existing docker container for "embed-certs-946178" ...
	I1029 09:35:48.612373  199087 cli_runner.go:164] Run: docker start embed-certs-946178
	I1029 09:35:48.940132  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:48.968416  199087 kic.go:430] container "embed-certs-946178" state is running.
	I1029 09:35:48.968795  199087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:35:49.001021  199087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/config.json ...
	I1029 09:35:49.001286  199087 machine.go:94] provisionDockerMachine start ...
	I1029 09:35:49.001362  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:49.030299  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:49.030877  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:49.030894  199087 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:35:49.031629  199087 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:35:52.192612  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-946178
	
	I1029 09:35:52.192685  199087 ubuntu.go:182] provisioning hostname "embed-certs-946178"
	I1029 09:35:52.192778  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:52.219903  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:52.220212  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:52.220222  199087 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-946178 && echo "embed-certs-946178" | sudo tee /etc/hostname
	I1029 09:35:52.408282  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-946178
	
	I1029 09:35:52.408389  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:52.434163  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:52.434494  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:52.434511  199087 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-946178' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-946178/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-946178' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:35:52.601505  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:35:52.601561  199087 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:35:52.601594  199087 ubuntu.go:190] setting up certificates
	I1029 09:35:52.601617  199087 provision.go:84] configureAuth start
	I1029 09:35:52.601706  199087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:35:52.626923  199087 provision.go:143] copyHostCerts
	I1029 09:35:52.627008  199087 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:35:52.627031  199087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:35:52.627105  199087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:35:52.627216  199087 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:35:52.627229  199087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:35:52.627260  199087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:35:52.627331  199087 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:35:52.627341  199087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:35:52.627369  199087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:35:52.627537  199087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.embed-certs-946178 san=[127.0.0.1 192.168.85.2 embed-certs-946178 localhost minikube]
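	For reference, the SAN list requested above (127.0.0.1, 192.168.85.2, embed-certs-946178, localhost, minikube) can be confirmed in the generated server.pem with a standard openssl query; this is an illustrative check, not something the test run executes:
	  openssl x509 -in /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem \
	    -noout -text | grep -A1 'Subject Alternative Name'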
	I1029 09:35:53.811225  199087 provision.go:177] copyRemoteCerts
	I1029 09:35:53.811341  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:35:53.811423  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:53.830404  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:53.948578  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:35:53.982501  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:35:54.005434  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:35:54.029846  199087 provision.go:87] duration metric: took 1.42820267s to configureAuth
	I1029 09:35:54.029926  199087 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:35:54.030178  199087 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:54.030346  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.055802  199087 main.go:143] libmachine: Using SSH client type: native
	I1029 09:35:54.056105  199087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1029 09:35:54.056120  199087 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:35:54.581632  199087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:35:54.581657  199087 machine.go:97] duration metric: took 5.580359663s to provisionDockerMachine
	I1029 09:35:54.581667  199087 start.go:293] postStartSetup for "embed-certs-946178" (driver="docker")
	I1029 09:35:54.581678  199087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:35:54.581738  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:35:54.581786  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.612249  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:54.745834  199087 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:35:54.757194  199087 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:35:54.757225  199087 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:35:54.757236  199087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:35:54.757287  199087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:35:54.757390  199087 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:35:54.757503  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:35:54.768962  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:35:54.793298  199087 start.go:296] duration metric: took 211.615976ms for postStartSetup
	I1029 09:35:54.793376  199087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:35:54.793421  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.813719  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:54.924786  199087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:35:54.932522  199087 fix.go:56] duration metric: took 6.345943092s for fixHost
	I1029 09:35:54.932547  199087 start.go:83] releasing machines lock for "embed-certs-946178", held for 6.345990591s
	I1029 09:35:54.932614  199087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-946178
	I1029 09:35:54.973583  199087 ssh_runner.go:195] Run: cat /version.json
	I1029 09:35:54.973641  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:54.973849  199087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:35:54.973913  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:55.019733  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:55.024988  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:55.144706  199087 ssh_runner.go:195] Run: systemctl --version
	I1029 09:35:55.254917  199087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:35:55.330970  199087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:35:55.336296  199087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:35:55.336378  199087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:35:55.344160  199087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:35:55.344184  199087 start.go:496] detecting cgroup driver to use...
	I1029 09:35:55.344215  199087 detect.go:187] detected "cgroupfs" cgroup driver on host os
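	The "cgroupfs" result above comes from minikube's host detection; an illustrative manual probe (an assumption about an equivalent check, not necessarily what detect.go does) is to look at the filesystem type mounted at /sys/fs/cgroup:
	  # cgroup v2 hosts expose a single cgroup2fs mount here; cgroup v1 hosts typically report tmpfs.
	  stat -fc %T /sys/fs/cgroup
	  # "cgroup2fs" -> unified v2 hierarchy (systemd driver is the usual pairing)
	  # "tmpfs"     -> legacy v1 hierarchy (cgroupfs, as detected on this host)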
	I1029 09:35:55.344262  199087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:35:55.365005  199087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:35:55.380728  199087 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:35:55.380843  199087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:35:55.398031  199087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:35:55.411982  199087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:35:55.605459  199087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:35:55.787029  199087 docker.go:234] disabling docker service ...
	I1029 09:35:55.787101  199087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:35:55.803082  199087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:35:55.825022  199087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:35:55.973930  199087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:35:56.157734  199087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:35:56.172992  199087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:35:56.188517  199087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:35:56.188590  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.198680  199087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:35:56.198743  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.208424  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.218024  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.228580  199087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:35:56.238298  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.251992  199087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.265334  199087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:35:56.276023  199087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:35:56.285474  199087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:35:56.294330  199087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:35:56.419802  199087 ssh_runner.go:195] Run: sudo systemctl restart crio
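	The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf; an illustrative way to confirm the values crio picked up after the restart (commands assumed here, not part of the test run):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl is-active crio   # should report "active" once the restart completes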
	I1029 09:35:56.690951  199087 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:35:56.691045  199087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:35:56.699947  199087 start.go:564] Will wait 60s for crictl version
	I1029 09:35:56.700062  199087 ssh_runner.go:195] Run: which crictl
	I1029 09:35:56.704151  199087 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:35:56.766117  199087 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:35:56.766216  199087 ssh_runner.go:195] Run: crio --version
	I1029 09:35:56.834470  199087 ssh_runner.go:195] Run: crio --version
	I1029 09:35:56.880606  199087 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1029 09:35:53.143547  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:35:55.645885  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:35:56.883623  199087 cli_runner.go:164] Run: docker network inspect embed-certs-946178 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:35:56.902479  199087 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:35:56.907902  199087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:35:56.929126  199087 kubeadm.go:884] updating cluster {Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:35:56.929298  199087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:35:56.929381  199087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:35:56.978907  199087 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:35:56.978928  199087 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:35:56.978985  199087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:35:57.023656  199087 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:35:57.023682  199087 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:35:57.023691  199087 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:35:57.023804  199087 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-946178 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
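	The kubelet flags above end up in the 368-byte drop-in scp'd below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf); an illustrative way to inspect the merged unit on the node:
	  sudo systemctl cat kubelet                                        # unit file plus all drop-ins
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart override shown above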
	I1029 09:35:57.023884  199087 ssh_runner.go:195] Run: crio config
	I1029 09:35:57.077428  199087 cni.go:84] Creating CNI manager for ""
	I1029 09:35:57.077494  199087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:35:57.077528  199087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:35:57.077555  199087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-946178 NodeName:embed-certs-946178 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:35:57.077714  199087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-946178"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:35:57.077787  199087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:35:57.086234  199087 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:35:57.086355  199087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:35:57.094218  199087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1029 09:35:57.107464  199087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:35:57.120972  199087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
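	The 2215-byte file scp'd above is the kubeadm config printed a few lines earlier; as an illustrative check it can be inspected on the node, and where the kubeadm binary ships the `config validate` subcommand (assumed available for v1.34) it can be schema-checked before use:
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new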
	I1029 09:35:57.137259  199087 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:35:57.141362  199087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:35:57.151543  199087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:35:57.275401  199087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:35:57.291579  199087 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178 for IP: 192.168.85.2
	I1029 09:35:57.291662  199087 certs.go:195] generating shared ca certs ...
	I1029 09:35:57.291693  199087 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:57.291882  199087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:35:57.291952  199087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:35:57.291988  199087 certs.go:257] generating profile certs ...
	I1029 09:35:57.292114  199087 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/client.key
	I1029 09:35:57.292220  199087 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key.8f5fae26
	I1029 09:35:57.292285  199087 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key
	I1029 09:35:57.292459  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:35:57.292520  199087 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:35:57.292538  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:35:57.292579  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:35:57.292612  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:35:57.292652  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:35:57.292701  199087 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:35:57.293248  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:35:57.315401  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:35:57.336596  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:35:57.357708  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:35:57.379108  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:35:57.405897  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:35:57.430451  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:35:57.452389  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/embed-certs-946178/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:35:57.479026  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:35:57.508705  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:35:57.532174  199087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:35:57.552388  199087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:35:57.567511  199087 ssh_runner.go:195] Run: openssl version
	I1029 09:35:57.573992  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:35:57.582449  199087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:35:57.586402  199087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:35:57.586515  199087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:35:57.636129  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:35:57.644583  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:35:57.653468  199087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:35:57.658935  199087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:35:57.659005  199087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:35:57.700572  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:35:57.708600  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:35:57.720410  199087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:35:57.724400  199087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:35:57.724516  199087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:35:57.766355  199087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
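	For context, the 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes; a minimal way to reproduce one by hand:
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints the subject hash, e.g. b5213941
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # the .0 suffix disambiguates hash collisions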
	I1029 09:35:57.774308  199087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:35:57.778039  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:35:57.819869  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:35:57.862392  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:35:57.904144  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:35:57.951415  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:35:58.020064  199087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
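	Each -checkend 86400 run above exits 0 only if the certificate remains valid for at least the next 24 hours (86400 seconds); a standalone equivalent of one such check:
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert valid for at least another 24h"
	  else
	    echo "cert expires within 24h (or could not be read)"
	  fi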
	I1029 09:35:58.107163  199087 kubeadm.go:401] StartCluster: {Name:embed-certs-946178 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-946178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:35:58.107259  199087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:35:58.107335  199087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:35:58.182751  199087 cri.go:89] found id: "8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15"
	I1029 09:35:58.182775  199087 cri.go:89] found id: "1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68"
	I1029 09:35:58.182781  199087 cri.go:89] found id: "0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365"
	I1029 09:35:58.182785  199087 cri.go:89] found id: "9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a"
	I1029 09:35:58.182797  199087 cri.go:89] found id: ""
	I1029 09:35:58.182848  199087 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:35:58.229289  199087 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:35:58.229385  199087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:35:58.250707  199087 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:35:58.250732  199087 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:35:58.250786  199087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:35:58.263076  199087 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:35:58.263655  199087 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-946178" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:58.263899  199087 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-946178" cluster setting kubeconfig missing "embed-certs-946178" context setting]
	I1029 09:35:58.264427  199087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:58.265778  199087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:35:58.282215  199087 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:35:58.282252  199087 kubeadm.go:602] duration metric: took 31.513344ms to restartPrimaryControlPlane
	I1029 09:35:58.282262  199087 kubeadm.go:403] duration metric: took 175.10849ms to StartCluster
	I1029 09:35:58.282277  199087 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:58.282356  199087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:35:58.283658  199087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:35:58.285827  199087 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:35:58.285897  199087 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:35:58.285943  199087 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:35:58.286001  199087 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-946178"
	I1029 09:35:58.286019  199087 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-946178"
	W1029 09:35:58.286032  199087 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:35:58.286052  199087 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:35:58.286520  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.287349  199087 addons.go:70] Setting dashboard=true in profile "embed-certs-946178"
	I1029 09:35:58.287376  199087 addons.go:239] Setting addon dashboard=true in "embed-certs-946178"
	W1029 09:35:58.287384  199087 addons.go:248] addon dashboard should already be in state true
	I1029 09:35:58.287417  199087 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:35:58.287868  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.290649  199087 addons.go:70] Setting default-storageclass=true in profile "embed-certs-946178"
	I1029 09:35:58.290689  199087 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-946178"
	I1029 09:35:58.291015  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.295704  199087 out.go:179] * Verifying Kubernetes components...
	I1029 09:35:58.302529  199087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:35:58.340300  199087 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:35:58.343788  199087 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:35:58.347803  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:35:58.347831  199087 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:35:58.347909  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:58.353151  199087 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1029 09:35:58.137205  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:00.139889  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:02.636720  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:35:58.356158  199087 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:35:58.356182  199087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:35:58.356249  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:58.365481  199087 addons.go:239] Setting addon default-storageclass=true in "embed-certs-946178"
	W1029 09:35:58.365516  199087 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:35:58.365541  199087 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:35:58.365964  199087 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:35:58.409815  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:58.424508  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:58.426960  199087 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:35:58.426981  199087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:35:58.427081  199087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:35:58.466156  199087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:35:58.662597  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:35:58.662638  199087 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:35:58.685728  199087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:35:58.705440  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:35:58.705467  199087 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:35:58.713533  199087 node_ready.go:35] waiting up to 6m0s for node "embed-certs-946178" to be "Ready" ...
	I1029 09:35:58.718161  199087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:35:58.761569  199087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:35:58.782548  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:35:58.782575  199087 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:35:58.882167  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:35:58.882194  199087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:35:58.942090  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:35:58.942119  199087 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:35:58.966042  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:35:58.966067  199087 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:35:58.984574  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:35:58.984603  199087 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:35:59.013785  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:35:59.013825  199087 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:35:59.045809  199087 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:35:59.045846  199087 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:35:59.074143  199087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:36:03.120944  199087 node_ready.go:49] node "embed-certs-946178" is "Ready"
	I1029 09:36:03.121028  199087 node_ready.go:38] duration metric: took 4.407436324s for node "embed-certs-946178" to be "Ready" ...
	I1029 09:36:03.121056  199087 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:36:03.121144  199087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:36:03.347717  199087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.62951775s)
	I1029 09:36:04.795395  199087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033777088s)
	I1029 09:36:04.795611  199087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.721434564s)
	I1029 09:36:04.795642  199087 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.674462794s)
	I1029 09:36:04.795838  199087 api_server.go:72] duration metric: took 6.509913302s to wait for apiserver process to appear ...
	I1029 09:36:04.795852  199087 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:36:04.795874  199087 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:36:04.799253  199087 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-946178 addons enable metrics-server
	
	I1029 09:36:04.802655  199087 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1029 09:36:04.637124  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:07.135687  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:36:04.805758  199087 addons.go:515] duration metric: took 6.519793037s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1029 09:36:04.814571  199087 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:36:04.816043  199087 api_server.go:141] control plane version: v1.34.1
	I1029 09:36:04.816094  199087 api_server.go:131] duration metric: took 20.234303ms to wait for apiserver health ...
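[editor's note] The wait above simply polls https://192.168.85.2:8443/healthz until it answers 200 with body "ok". A minimal sketch of that probe pattern follows; it is not minikube's api_server.go code, and the endpoint, timeouts, and the skipped TLS verification are assumptions made only to keep the example short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real probe should trust the cluster CA
			// instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}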
	I1029 09:36:04.816117  199087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:36:04.821164  199087 system_pods.go:59] 8 kube-system pods found
	I1029 09:36:04.821242  199087 system_pods.go:61] "coredns-66bc5c9577-fszff" [20eec5cd-ff72-435d-8bf3-186261f7029b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:36:04.821270  199087 system_pods.go:61] "etcd-embed-certs-946178" [0d9dac68-e3a7-4602-b820-9b5f6d8a637c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:36:04.821309  199087 system_pods.go:61] "kindnet-8lf6r" [67b8d2ab-954a-4f88-9ef0-fd96b500d79d] Running
	I1029 09:36:04.821336  199087 system_pods.go:61] "kube-apiserver-embed-certs-946178" [e774ef45-3d15-4691-aeda-044539edf25c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:36:04.821361  199087 system_pods.go:61] "kube-controller-manager-embed-certs-946178" [a7cbe94f-cfdb-421f-a335-7e796ce50d35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:36:04.821389  199087 system_pods.go:61] "kube-proxy-8zwf2" [3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2] Running
	I1029 09:36:04.821420  199087 system_pods.go:61] "kube-scheduler-embed-certs-946178" [406787cb-5f66-4c15-9938-0f4ed33dab0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:36:04.821447  199087 system_pods.go:61] "storage-provisioner" [b2401761-29ab-456b-9542-f90d10c5c3dd] Running
	I1029 09:36:04.821480  199087 system_pods.go:74] duration metric: took 5.343957ms to wait for pod list to return data ...
	I1029 09:36:04.821503  199087 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:36:04.824124  199087 default_sa.go:45] found service account: "default"
	I1029 09:36:04.824188  199087 default_sa.go:55] duration metric: took 2.655211ms for default service account to be created ...
	I1029 09:36:04.824213  199087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:36:04.921125  199087 system_pods.go:86] 8 kube-system pods found
	I1029 09:36:04.921209  199087 system_pods.go:89] "coredns-66bc5c9577-fszff" [20eec5cd-ff72-435d-8bf3-186261f7029b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:36:04.921237  199087 system_pods.go:89] "etcd-embed-certs-946178" [0d9dac68-e3a7-4602-b820-9b5f6d8a637c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:36:04.921280  199087 system_pods.go:89] "kindnet-8lf6r" [67b8d2ab-954a-4f88-9ef0-fd96b500d79d] Running
	I1029 09:36:04.921311  199087 system_pods.go:89] "kube-apiserver-embed-certs-946178" [e774ef45-3d15-4691-aeda-044539edf25c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:36:04.921336  199087 system_pods.go:89] "kube-controller-manager-embed-certs-946178" [a7cbe94f-cfdb-421f-a335-7e796ce50d35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:36:04.921371  199087 system_pods.go:89] "kube-proxy-8zwf2" [3571c7e9-109e-43ac-8a13-fbf0f2c0b2f2] Running
	I1029 09:36:04.921398  199087 system_pods.go:89] "kube-scheduler-embed-certs-946178" [406787cb-5f66-4c15-9938-0f4ed33dab0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:36:04.921421  199087 system_pods.go:89] "storage-provisioner" [b2401761-29ab-456b-9542-f90d10c5c3dd] Running
	I1029 09:36:04.921458  199087 system_pods.go:126] duration metric: took 97.226578ms to wait for k8s-apps to be running ...
	I1029 09:36:04.921485  199087 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:36:04.921571  199087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:04.935336  199087 system_svc.go:56] duration metric: took 13.830883ms WaitForService to wait for kubelet
	I1029 09:36:04.935422  199087 kubeadm.go:587] duration metric: took 6.649496346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:36:04.935457  199087 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:36:04.939101  199087 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:36:04.939180  199087 node_conditions.go:123] node cpu capacity is 2
	I1029 09:36:04.939224  199087 node_conditions.go:105] duration metric: took 3.732447ms to run NodePressure ...
	I1029 09:36:04.939253  199087 start.go:242] waiting for startup goroutines ...
	I1029 09:36:04.939288  199087 start.go:247] waiting for cluster config update ...
	I1029 09:36:04.939319  199087 start.go:256] writing updated cluster config ...
	I1029 09:36:04.939682  199087 ssh_runner.go:195] Run: rm -f paused
	I1029 09:36:04.943594  199087 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:36:04.947923  199087 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fszff" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:36:06.953672  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:09.136245  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:11.635803  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:09.457602  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:11.953306  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:13.640898  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:16.136675  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	W1029 09:36:13.954106  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:16.454112  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:18.136792  196421 pod_ready.go:104] pod "coredns-66bc5c9577-zpgms" is not "Ready", error: <nil>
	I1029 09:36:19.135297  196421 pod_ready.go:94] pod "coredns-66bc5c9577-zpgms" is "Ready"
	I1029 09:36:19.135385  196421 pod_ready.go:86] duration metric: took 36.504985007s for pod "coredns-66bc5c9577-zpgms" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.138203  196421 pod_ready.go:83] waiting for pod "etcd-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.142659  196421 pod_ready.go:94] pod "etcd-no-preload-505993" is "Ready"
	I1029 09:36:19.142688  196421 pod_ready.go:86] duration metric: took 4.454636ms for pod "etcd-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.145156  196421 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.149702  196421 pod_ready.go:94] pod "kube-apiserver-no-preload-505993" is "Ready"
	I1029 09:36:19.149729  196421 pod_ready.go:86] duration metric: took 4.510932ms for pod "kube-apiserver-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.151811  196421 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.334222  196421 pod_ready.go:94] pod "kube-controller-manager-no-preload-505993" is "Ready"
	I1029 09:36:19.334251  196421 pod_ready.go:86] duration metric: took 182.384156ms for pod "kube-controller-manager-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.533498  196421 pod_ready.go:83] waiting for pod "kube-proxy-r6974" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:19.933475  196421 pod_ready.go:94] pod "kube-proxy-r6974" is "Ready"
	I1029 09:36:19.933503  196421 pod_ready.go:86] duration metric: took 399.977082ms for pod "kube-proxy-r6974" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:20.133492  196421 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:20.534232  196421 pod_ready.go:94] pod "kube-scheduler-no-preload-505993" is "Ready"
	I1029 09:36:20.534263  196421 pod_ready.go:86] duration metric: took 400.741183ms for pod "kube-scheduler-no-preload-505993" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:36:20.534277  196421 pod_ready.go:40] duration metric: took 37.962294444s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:36:20.587461  196421 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:36:20.591367  196421 out.go:179] * Done! kubectl is now configured to use "no-preload-505993" cluster and "default" namespace by default
	W1029 09:36:18.455150  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:20.954006  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:22.954461  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:25.453318  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:27.456759  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:29.954045  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
	W1029 09:36:32.455787  199087 pod_ready.go:104] pod "coredns-66bc5c9577-fszff" is not "Ready", error: <nil>
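[editor's note] Both profiles finish their runs by polling kube-system pods (pod_ready.go) until each pod reports a Ready condition or the extra-wait timeout expires, which is what the repeated "is not \"Ready\"" warnings above reflect. A minimal client-go sketch of that polling pattern follows; it is illustrative only (not minikube's helper), and the kubeconfig path, namespace, pod name, and timeouts are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Placeholder pod name; the log above waits on the coredns replica.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "example-pod", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}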
	
	
	==> CRI-O <==
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.855494728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e387a0c-3ef4-44b9-9cf3-a30a3e2bf424 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.858022799Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=50fdb341-b88e-47b2-aa04-497efadcf7de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.858127662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.874809105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.875255173Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dcc5bf377091740883291190dc19162888d515d3c4b382e1b14e2e1c25b4ca2e/merged/etc/passwd: no such file or directory"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.875431609Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dcc5bf377091740883291190dc19162888d515d3c4b382e1b14e2e1c25b4ca2e/merged/etc/group: no such file or directory"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.877097954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.915898082Z" level=info msg="Created container 6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e: kube-system/storage-provisioner/storage-provisioner" id=50fdb341-b88e-47b2-aa04-497efadcf7de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.917290095Z" level=info msg="Starting container: 6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e" id=ff59f343-988c-4e11-98e4-5bb90c431342 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:36:12 no-preload-505993 crio[649]: time="2025-10-29T09:36:12.923858202Z" level=info msg="Started container" PID=1648 containerID=6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e description=kube-system/storage-provisioner/storage-provisioner id=ff59f343-988c-4e11-98e4-5bb90c431342 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09d87b325636520084586d3b547e5bd967104258c89fac6114e699ad20c7b6d6
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.256829168Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.26120556Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.261365471Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.261398497Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.264808997Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.264845945Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.264875967Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.268743087Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.268794066Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.268816663Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.271884817Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.271918409Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.27194131Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.28049908Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:22 no-preload-505993 crio[649]: time="2025-10-29T09:36:22.280624693Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6c858fe9343d5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   09d87b3256365       storage-provisioner                          kube-system
	0b803e843ffd6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   5d3b41bce9d55       dashboard-metrics-scraper-6ffb444bf9-grzt8   kubernetes-dashboard
	c16295e57713c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   9ab79a9bb5be3       kubernetes-dashboard-855c9754f9-985l5        kubernetes-dashboard
	5cbe8ea853be2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   e9811abb55348       busybox                                      default
	fb26471a716c9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   5176d832bb8a9       coredns-66bc5c9577-zpgms                     kube-system
	3d1bab9263ded       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   cc67ad8e252bf       kindnet-9z7ks                                kube-system
	2431d11a99c39       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   d307942825f7b       kube-proxy-r6974                             kube-system
	121378d1386ec       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   09d87b3256365       storage-provisioner                          kube-system
	f23cc204350b0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b6f6e07d6e847       kube-controller-manager-no-preload-505993    kube-system
	d0806a4c4d5e1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b372655fcba27       kube-apiserver-no-preload-505993             kube-system
	dde28c2b4cc40       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7aabf0a2c3ae8       etcd-no-preload-505993                       kube-system
	de2e4bbcf70fb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   33b0528c46dd8       kube-scheduler-no-preload-505993             kube-system
	
	
	==> coredns [fb26471a716c99b053f80007a29cfb0be111d8091a8005d0e65f204374cad040] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36159 - 36356 "HINFO IN 3767339234654625109.1964683783820827812. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077778462s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-505993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-505993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=no-preload-505993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_34_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:34:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-505993
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:36:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:34:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:36:32 +0000   Wed, 29 Oct 2025 09:35:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-505993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                389ba72a-ee76-4894-8bbe-d133735524b8
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-zpgms                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-505993                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-9z7ks                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-505993              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-505993     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-r6974                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-505993              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-grzt8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-985l5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 55s                  kube-proxy       
	  Normal   Starting                 2m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           112s                 node-controller  Node no-preload-505993 event: Registered Node no-preload-505993 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-505993 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-505993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-505993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-505993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                  node-controller  Node no-preload-505993 event: Registered Node no-preload-505993 in Controller
	
	
	==> dmesg <==
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dde28c2b4cc40af65ac06f06ec71c70d2e4934a002e393f3f98a4ea31fa0d591] <==
	{"level":"warn","ts":"2025-10-29T09:35:39.645412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.696046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.721043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.752869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.788240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.820614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.848543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.864564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.891618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.940914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.963592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:39.977755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.006899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.017315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.033453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.051205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.065115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.085381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.098933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.149136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.164667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.189462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.206661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.222067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:35:40.274867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:37 up  1:19,  0 user,  load average: 3.82, 3.76, 2.89
	Linux no-preload-505993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3d1bab9263ded9097697406f9949289734bffab4265f224193d85be8901fec23] <==
	I1029 09:35:42.061847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:35:42.064473       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:35:42.064715       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:35:42.064761       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:35:42.064808       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:35:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:35:42.255051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:35:42.255154       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:35:42.255191       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:35:42.256134       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:36:12.255358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:36:12.255668       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:36:12.256936       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:36:12.257212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1029 09:36:13.255548       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:36:13.255584       1 metrics.go:72] Registering metrics
	I1029 09:36:13.255653       1 controller.go:711] "Syncing nftables rules"
	I1029 09:36:22.255864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:36:22.256542       1 main.go:301] handling current node
	I1029 09:36:32.263682       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:36:32.263714       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0806a4c4d5e1ea92918f9224d777f2c3e94d25f663aaf70d5e9b0de3f5f3797] <==
	I1029 09:35:41.272836       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:35:41.272877       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:35:41.294695       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:35:41.299123       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:35:41.299247       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:35:41.299282       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:35:41.299294       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:35:41.299553       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:35:41.299591       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:35:41.300608       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:35:41.300635       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:35:41.300642       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:35:41.300650       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:35:41.352259       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:35:41.537784       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:35:41.879626       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:35:41.895064       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:35:42.070561       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:35:42.135408       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:35:42.162323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:35:42.297110       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.187.234"}
	I1029 09:35:42.321333       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.111.179"}
	I1029 09:35:44.897527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:35:44.946250       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:35:45.066265       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f23cc204350b0e724d2d7de7e25812962bbbce24b9a5a7e022bc727f6a80b18c] <==
	I1029 09:35:44.540779       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:35:44.540791       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:35:44.542977       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:35:44.548974       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:35:44.554624       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:35:44.559869       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:35:44.562002       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:35:44.565243       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:35:44.573491       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:35:44.586867       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:35:44.586937       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:35:44.586970       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:35:44.586975       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:35:44.586980       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:35:44.589993       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:35:44.590056       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:35:44.590101       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:35:44.590125       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:35:44.590170       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:35:44.590181       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:35:44.590188       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:35:44.590253       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:35:44.590767       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:35:44.590867       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:35:44.593752       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [2431d11a99c398651365afb64f2024dc94310b6991d7f607080b149d3ed50e0d] <==
	I1029 09:35:42.283466       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:35:42.460817       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:35:42.562177       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:35:42.565422       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:35:42.565546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:35:42.591831       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:35:42.591881       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:35:42.596151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:35:42.596741       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:35:42.596760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:35:42.597861       1 config.go:200] "Starting service config controller"
	I1029 09:35:42.597880       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:35:42.602034       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:35:42.602056       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:35:42.602075       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:35:42.602082       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:35:42.602483       1 config.go:309] "Starting node config controller"
	I1029 09:35:42.602500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:35:42.602507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:35:42.698252       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:35:42.702577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:35:42.702594       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de2e4bbcf70fb4e2b145da2e1eeeb3965129da682b985d428d7db1b5c139f9ac] <==
	I1029 09:35:39.289694       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:35:41.032553       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:35:41.032582       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:35:41.032592       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:35:41.032599       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:35:41.275745       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:35:41.275776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:35:41.283560       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:35:41.284508       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:35:41.292819       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:35:41.284529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:35:41.395387       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:35:45 no-preload-505993 kubelet[769]: I1029 09:35:45.249086     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxbg6\" (UniqueName: \"kubernetes.io/projected/af3605e2-60e5-49fb-9b85-109d52e037a5-kube-api-access-bxbg6\") pod \"kubernetes-dashboard-855c9754f9-985l5\" (UID: \"af3605e2-60e5-49fb-9b85-109d52e037a5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-985l5"
	Oct 29 09:35:45 no-preload-505993 kubelet[769]: W1029 09:35:45.466822     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-5d3b41bce9d55629374e1a7f8875c385ea95ae964339ce04467cae01aede085b WatchSource:0}: Error finding container 5d3b41bce9d55629374e1a7f8875c385ea95ae964339ce04467cae01aede085b: Status 404 returned error can't find the container with id 5d3b41bce9d55629374e1a7f8875c385ea95ae964339ce04467cae01aede085b
	Oct 29 09:35:45 no-preload-505993 kubelet[769]: W1029 09:35:45.471481     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d63baf692038c6771a74322ac553fcd847fb56f516f845444c48dff9a54d272a/crio-9ab79a9bb5be3c15418d1287920ab13e4eb3e1a89cf50289029f058aff23f292 WatchSource:0}: Error finding container 9ab79a9bb5be3c15418d1287920ab13e4eb3e1a89cf50289029f058aff23f292: Status 404 returned error can't find the container with id 9ab79a9bb5be3c15418d1287920ab13e4eb3e1a89cf50289029f058aff23f292
	Oct 29 09:35:48 no-preload-505993 kubelet[769]: I1029 09:35:48.929989     769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:35:50 no-preload-505993 kubelet[769]: I1029 09:35:50.757768     769 scope.go:117] "RemoveContainer" containerID="d48d75c152158fbcddb817be10dbf9a9d065fd9b291a4965cf18e2ca9c565796"
	Oct 29 09:35:51 no-preload-505993 kubelet[769]: I1029 09:35:51.761622     769 scope.go:117] "RemoveContainer" containerID="d48d75c152158fbcddb817be10dbf9a9d065fd9b291a4965cf18e2ca9c565796"
	Oct 29 09:35:51 no-preload-505993 kubelet[769]: I1029 09:35:51.761900     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:35:51 no-preload-505993 kubelet[769]: E1029 09:35:51.762034     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:35:52 no-preload-505993 kubelet[769]: I1029 09:35:52.784753     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:35:52 no-preload-505993 kubelet[769]: E1029 09:35:52.797890     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:35:55 no-preload-505993 kubelet[769]: I1029 09:35:55.437585     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:35:55 no-preload-505993 kubelet[769]: E1029 09:35:55.437756     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.608912     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.835937     769 scope.go:117] "RemoveContainer" containerID="b164421cd67b93345e81ad30f3feb6e24d190ed77aff8ef6ed944caa8a28b747"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.836264     769 scope.go:117] "RemoveContainer" containerID="0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: E1029 09:36:09.836443     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:09 no-preload-505993 kubelet[769]: I1029 09:36:09.866295     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-985l5" podStartSLOduration=13.894118854 podStartE2EDuration="24.866278043s" podCreationTimestamp="2025-10-29 09:35:45 +0000 UTC" firstStartedPulling="2025-10-29 09:35:45.477659888 +0000 UTC m=+10.224318348" lastFinishedPulling="2025-10-29 09:35:56.449819077 +0000 UTC m=+21.196477537" observedRunningTime="2025-10-29 09:35:56.825510364 +0000 UTC m=+21.572168840" watchObservedRunningTime="2025-10-29 09:36:09.866278043 +0000 UTC m=+34.612936520"
	Oct 29 09:36:12 no-preload-505993 kubelet[769]: I1029 09:36:12.847233     769 scope.go:117] "RemoveContainer" containerID="121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763"
	Oct 29 09:36:15 no-preload-505993 kubelet[769]: I1029 09:36:15.437394     769 scope.go:117] "RemoveContainer" containerID="0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	Oct 29 09:36:15 no-preload-505993 kubelet[769]: E1029 09:36:15.438239     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:26 no-preload-505993 kubelet[769]: I1029 09:36:26.609093     769 scope.go:117] "RemoveContainer" containerID="0b803e843ffd62b30959b23b502469c6d39c63220ac980f6e8eb563b723110eb"
	Oct 29 09:36:26 no-preload-505993 kubelet[769]: E1029 09:36:26.609286     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-grzt8_kubernetes-dashboard(dd216e78-4d58-4289-97f3-d4160569b000)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-grzt8" podUID="dd216e78-4d58-4289-97f3-d4160569b000"
	Oct 29 09:36:32 no-preload-505993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:36:32 no-preload-505993 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:36:32 no-preload-505993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c16295e57713c6fa7f970f1681964ea937293af9f41f054bac638fc23c2a75e1] <==
	2025/10/29 09:35:56 Using namespace: kubernetes-dashboard
	2025/10/29 09:35:56 Using in-cluster config to connect to apiserver
	2025/10/29 09:35:56 Using secret token for csrf signing
	2025/10/29 09:35:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:35:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:35:56 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:35:56 Generating JWE encryption key
	2025/10/29 09:35:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:35:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:35:56 Initializing JWE encryption key from synchronized object
	2025/10/29 09:35:57 Creating in-cluster Sidecar client
	2025/10/29 09:35:57 Serving insecurely on HTTP port: 9090
	2025/10/29 09:35:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:36:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:35:56 Starting overwatch
	
	
	==> storage-provisioner [121378d1386ec391bb77c32c4dcdf7ab70266c9bb6c9219a6be7ff9d90b0f763] <==
	I1029 09:35:41.913699       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:36:11.921317       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6c858fe9343d518a3734dba79545bb4f9be5a1caad65608525b2cfbdc1cb354e] <==
	I1029 09:36:12.940287       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:36:12.962311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:36:12.962373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:36:12.968600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:16.424794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:20.685567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:24.284568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:27.338522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:30.360247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:30.367471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:30.367663       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:36:30.367879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-505993_fca96da2-5e48-4fa9-a734-07ed5f520709!
	W1029 09:36:30.377961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:30.368082       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbaa88f5-db56-42e2-b30d-ab8c0d14deb0", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-505993_fca96da2-5e48-4fa9-a734-07ed5f520709 became leader
	W1029 09:36:30.380933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:30.468516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-505993_fca96da2-5e48-4fa9-a734-07ed5f520709!
	W1029 09:36:32.387362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:32.399210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:34.403174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:34.408488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:36.411515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:36.418907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
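The second storage-provisioner instance in the logs above waits on the kube-system/k8s.io-minikube-hostpath lock before starting its controller, and each poll emits a "v1 Endpoints is deprecated" warning because the lock object it uses is an Endpoints resource. A minimal client-go leader-election sketch follows for comparison; it uses the newer Lease-based lock, so the lock type, identity, and all timing values are illustrative assumptions, not the provisioner's actual configuration.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	hostname, _ := os.Hostname()

	// Lease-based lock; the provisioner in the log still uses an Endpoints-based
	// lock, which is what triggers the repeated deprecation warnings above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Matches "successfully acquired lease ... Starting provisioner controller".
				log.Println("acquired lease; starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}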
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-505993 -n no-preload-505993
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-505993 -n no-preload-505993: exit status 2 (359.915015ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-505993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.45s)
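Both kindnet and kube-proxy in the post-mortem above block on informer cache sync ("Waiting for caches to sync" followed by "Caches are synced"), with kindnet's reflector retrying its list calls until the apiserver at 10.96.0.1:443 stops timing out. A minimal client-go sketch of that wait is shown below for a single Node informer; the resync interval and error handling are illustrative assumptions and not the code of either component.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config, the same path kindnet uses to reach https://10.96.0.1:443.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()
	factory.Start(ctx.Done())

	// The reflector keeps retrying list/watch (the i/o timeout errors above)
	// until a list succeeds; only then does WaitForCacheSync return true.
	if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced) {
		fmt.Println("caches never synced")
		return
	}
	fmt.Println("Caches are synced")
}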

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-946178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-946178 --alsologtostderr -v=1: exit status 80 (2.099174347s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-946178 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:36:51.799934  204119 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:36:51.800072  204119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:51.800085  204119 out.go:374] Setting ErrFile to fd 2...
	I1029 09:36:51.800114  204119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:51.800505  204119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:36:51.800812  204119 out.go:368] Setting JSON to false
	I1029 09:36:51.800840  204119 mustload.go:66] Loading cluster: embed-certs-946178
	I1029 09:36:51.801225  204119 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:36:51.801691  204119 cli_runner.go:164] Run: docker container inspect embed-certs-946178 --format={{.State.Status}}
	I1029 09:36:51.817810  204119 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:36:51.818151  204119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:36:51.881840  204119 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-29 09:36:51.872668743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:36:51.882536  204119 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-946178 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:36:51.886127  204119 out.go:179] * Pausing node embed-certs-946178 ... 
	I1029 09:36:51.889134  204119 host.go:66] Checking if "embed-certs-946178" exists ...
	I1029 09:36:51.889481  204119 ssh_runner.go:195] Run: systemctl --version
	I1029 09:36:51.889534  204119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178
	I1029 09:36:51.906383  204119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/embed-certs-946178/id_rsa Username:docker}
	I1029 09:36:52.013946  204119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:52.038955  204119 pause.go:52] kubelet running: true
	I1029 09:36:52.039016  204119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:36:52.319434  204119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:36:52.319523  204119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:36:52.418357  204119 cri.go:89] found id: "92f979d951b10a84254b437e918e99e627d64ede9c787c51501596b6a7d466f7"
	I1029 09:36:52.418384  204119 cri.go:89] found id: "36a804c0c5629e7001b18516f7faeb77607a1ec446a9dc1dfbac911a500eed0a"
	I1029 09:36:52.418390  204119 cri.go:89] found id: "07d7e42fa96175c53a244265ef556c75d7caea96ca747163e76e54182722faa4"
	I1029 09:36:52.418395  204119 cri.go:89] found id: "c5f1422765f907e545795e959e1a1fd7204c59fb9f789c52d7bf772991a37142"
	I1029 09:36:52.418398  204119 cri.go:89] found id: "a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274"
	I1029 09:36:52.418402  204119 cri.go:89] found id: "8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15"
	I1029 09:36:52.418405  204119 cri.go:89] found id: "1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68"
	I1029 09:36:52.418408  204119 cri.go:89] found id: "0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365"
	I1029 09:36:52.418412  204119 cri.go:89] found id: "9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a"
	I1029 09:36:52.418417  204119 cri.go:89] found id: "e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	I1029 09:36:52.418421  204119 cri.go:89] found id: "996e12d138170502140405ed35ffef95cddee211344908ec4df83911094c14ec"
	I1029 09:36:52.418424  204119 cri.go:89] found id: ""
	I1029 09:36:52.418473  204119 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:36:52.432799  204119 retry.go:31] will retry after 141.654732ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:52Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:36:52.575126  204119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:52.593060  204119 pause.go:52] kubelet running: false
	I1029 09:36:52.593136  204119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:36:52.822326  204119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:36:52.822457  204119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:36:52.906206  204119 cri.go:89] found id: "92f979d951b10a84254b437e918e99e627d64ede9c787c51501596b6a7d466f7"
	I1029 09:36:52.906234  204119 cri.go:89] found id: "36a804c0c5629e7001b18516f7faeb77607a1ec446a9dc1dfbac911a500eed0a"
	I1029 09:36:52.906240  204119 cri.go:89] found id: "07d7e42fa96175c53a244265ef556c75d7caea96ca747163e76e54182722faa4"
	I1029 09:36:52.906244  204119 cri.go:89] found id: "c5f1422765f907e545795e959e1a1fd7204c59fb9f789c52d7bf772991a37142"
	I1029 09:36:52.906247  204119 cri.go:89] found id: "a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274"
	I1029 09:36:52.906251  204119 cri.go:89] found id: "8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15"
	I1029 09:36:52.906254  204119 cri.go:89] found id: "1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68"
	I1029 09:36:52.906258  204119 cri.go:89] found id: "0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365"
	I1029 09:36:52.906262  204119 cri.go:89] found id: "9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a"
	I1029 09:36:52.906268  204119 cri.go:89] found id: "e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	I1029 09:36:52.906272  204119 cri.go:89] found id: "996e12d138170502140405ed35ffef95cddee211344908ec4df83911094c14ec"
	I1029 09:36:52.906275  204119 cri.go:89] found id: ""
	I1029 09:36:52.906321  204119 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:36:52.919302  204119 retry.go:31] will retry after 520.153774ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:52Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:36:53.439833  204119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:36:53.456242  204119 pause.go:52] kubelet running: false
	I1029 09:36:53.456298  204119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:36:53.701715  204119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:36:53.701798  204119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:36:53.804963  204119 cri.go:89] found id: "92f979d951b10a84254b437e918e99e627d64ede9c787c51501596b6a7d466f7"
	I1029 09:36:53.804984  204119 cri.go:89] found id: "36a804c0c5629e7001b18516f7faeb77607a1ec446a9dc1dfbac911a500eed0a"
	I1029 09:36:53.804989  204119 cri.go:89] found id: "07d7e42fa96175c53a244265ef556c75d7caea96ca747163e76e54182722faa4"
	I1029 09:36:53.804993  204119 cri.go:89] found id: "c5f1422765f907e545795e959e1a1fd7204c59fb9f789c52d7bf772991a37142"
	I1029 09:36:53.804996  204119 cri.go:89] found id: "a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274"
	I1029 09:36:53.805001  204119 cri.go:89] found id: "8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15"
	I1029 09:36:53.805004  204119 cri.go:89] found id: "1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68"
	I1029 09:36:53.805007  204119 cri.go:89] found id: "0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365"
	I1029 09:36:53.805011  204119 cri.go:89] found id: "9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a"
	I1029 09:36:53.805026  204119 cri.go:89] found id: "e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	I1029 09:36:53.805030  204119 cri.go:89] found id: "996e12d138170502140405ed35ffef95cddee211344908ec4df83911094c14ec"
	I1029 09:36:53.805033  204119 cri.go:89] found id: ""
	I1029 09:36:53.805083  204119 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:36:53.828226  204119 out.go:203] 
	W1029 09:36:53.831388  204119 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:36:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:36:53.831412  204119 out.go:285] * 
	* 
	W1029 09:36:53.837045  204119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:36:53.839996  204119 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-946178 --alsologtostderr -v=1 failed: exit status 80
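The stderr above shows the pause path enumerating CRI containers, then calling `sudo runc list -f json` over SSH and retrying with sub-second backoff before exiting with GUEST_PAUSE once /run/runc turns out not to exist. Below is a minimal sketch of that retry-then-fail pattern; the helper names (`listRunc`, `retryUntil`) and the backoff and timeout values are simplified assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunc is a stand-in for the step the trace shows failing:
// `sudo runc list -f json` exits 1 when /run/runc does not exist.
func listRunc() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list -f json: %w\n%s", err, out)
	}
	return nil
}

// retryUntil keeps calling fn with a short randomized backoff, mirroring the
// "will retry after 141.654732ms" lines, and returns the last error at the deadline.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return lastErr
		}
		backoff := time.Duration(100+rand.Intn(500)) * time.Millisecond
		fmt.Printf("will retry after %s: %v\n", backoff, lastErr)
		time.Sleep(backoff)
	}
}

func main() {
	if err := retryUntil(2*time.Second, listRunc); err != nil {
		// Corresponds to the "Exiting due to GUEST_PAUSE" message in the trace.
		fmt.Println("X Exiting due to GUEST_PAUSE: Pause:", err)
	}
}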
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-946178
helpers_test.go:243: (dbg) docker inspect embed-certs-946178:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691",
	        "Created": "2025-10-29T09:34:04.151290839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 199214,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:35:48.64884844Z",
	            "FinishedAt": "2025-10-29T09:35:47.485114596Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/hostname",
	        "HostsPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/hosts",
	        "LogPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691-json.log",
	        "Name": "/embed-certs-946178",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-946178:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-946178",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691",
	                "LowerDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-946178",
	                "Source": "/var/lib/docker/volumes/embed-certs-946178/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-946178",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-946178",
	                "name.minikube.sigs.k8s.io": "embed-certs-946178",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e6b111990b4d7fb35c76796e66715d64efdd480f5d2f5bb1562cfe3843e4566",
	            "SandboxKey": "/var/run/docker/netns/9e6b111990b4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-946178": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:04:0f:be:74:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58e14a6bd5919ac00c4f79c5de1533110411df785cd7d398ccc05d5f98f62442",
	                    "EndpointID": "20bd8bbc13e2a98c0c1ae2f30ed4850f24093b2809646ac40bd1b95880e8b38c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-946178",
	                        "b005fccf23a7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
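The SSH connection in the pause trace (Port:33068) comes from the published host port for 22/tcp in exactly this inspect output, which is what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-946178` call reads. Below is a small Go sketch that extracts the same field by decoding `docker inspect` JSON; the container name and field paths follow the output above, while the function name and error handling are illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding mirrors one entry under NetworkSettings.Ports in `docker inspect`.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

// sshHostPort returns the host port mapped to the container's 22/tcp endpoint.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no published binding for 22/tcp")
	}
	return bindings[0].HostPort, nil // "33068" in the inspect output above
}

func main() {
	port, err := sshHostPort("embed-certs-946178")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}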
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178: exit status 2 (439.748072ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-946178 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-946178 logs -n 25: (1.611908324s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:36:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:36:42.181714  202937 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:36:42.182940  202937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:42.182957  202937 out.go:374] Setting ErrFile to fd 2...
	I1029 09:36:42.182963  202937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:42.183416  202937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:36:42.184024  202937 out.go:368] Setting JSON to false
	I1029 09:36:42.185098  202937 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4754,"bootTime":1761725848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:36:42.185194  202937 start.go:143] virtualization:  
	I1029 09:36:42.189511  202937 out.go:179] * [default-k8s-diff-port-154565] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:36:42.194065  202937 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:36:42.194253  202937 notify.go:221] Checking for updates...
	I1029 09:36:42.200844  202937 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:36:42.204237  202937 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:36:42.207538  202937 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:36:42.211047  202937 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:36:42.214224  202937 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:36:42.218247  202937 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:36:42.218453  202937 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:36:42.255302  202937 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:36:42.255467  202937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:36:42.336779  202937 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:36:42.326745732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:36:42.336893  202937 docker.go:319] overlay module found
	I1029 09:36:42.340161  202937 out.go:179] * Using the docker driver based on user configuration
	I1029 09:36:42.343132  202937 start.go:309] selected driver: docker
	I1029 09:36:42.343153  202937 start.go:930] validating driver "docker" against <nil>
	I1029 09:36:42.343166  202937 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:36:42.343929  202937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:36:42.406824  202937 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:36:42.396898887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:36:42.406980  202937 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:36:42.407214  202937 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:36:42.410332  202937 out.go:179] * Using Docker driver with root privileges
	I1029 09:36:42.413368  202937 cni.go:84] Creating CNI manager for ""
	I1029 09:36:42.413440  202937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:36:42.413453  202937 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:36:42.413540  202937 start.go:353] cluster config:
	{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:36:42.418572  202937 out.go:179] * Starting "default-k8s-diff-port-154565" primary control-plane node in "default-k8s-diff-port-154565" cluster
	I1029 09:36:42.421347  202937 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:36:42.424675  202937 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:36:42.427462  202937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:36:42.427490  202937 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:36:42.427522  202937 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:36:42.427531  202937 cache.go:59] Caching tarball of preloaded images
	I1029 09:36:42.427611  202937 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:36:42.427621  202937 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:36:42.427726  202937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:36:42.427742  202937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json: {Name:mkfbd6d6bbc51eb8a3e524e494228ae4772192a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:36:42.448770  202937 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:36:42.448795  202937 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:36:42.448810  202937 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:36:42.448879  202937 start.go:360] acquireMachinesLock for default-k8s-diff-port-154565: {Name:mk949f3a944b6d0d5624c677fdcfbf59ea2f05b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:36:42.449027  202937 start.go:364] duration metric: took 124.26µs to acquireMachinesLock for "default-k8s-diff-port-154565"
	I1029 09:36:42.449058  202937 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:36:42.449145  202937 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:36:42.454489  202937 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:36:42.454734  202937 start.go:159] libmachine.API.Create for "default-k8s-diff-port-154565" (driver="docker")
	I1029 09:36:42.454781  202937 client.go:173] LocalClient.Create starting
	I1029 09:36:42.454864  202937 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 09:36:42.454904  202937 main.go:143] libmachine: Decoding PEM data...
	I1029 09:36:42.454918  202937 main.go:143] libmachine: Parsing certificate...
	I1029 09:36:42.454980  202937 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 09:36:42.455006  202937 main.go:143] libmachine: Decoding PEM data...
	I1029 09:36:42.455017  202937 main.go:143] libmachine: Parsing certificate...
	I1029 09:36:42.455391  202937 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-154565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:36:42.471681  202937 cli_runner.go:211] docker network inspect default-k8s-diff-port-154565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:36:42.471778  202937 network_create.go:284] running [docker network inspect default-k8s-diff-port-154565] to gather additional debugging logs...
	I1029 09:36:42.471800  202937 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-154565
	W1029 09:36:42.486741  202937 cli_runner.go:211] docker network inspect default-k8s-diff-port-154565 returned with exit code 1
	I1029 09:36:42.486783  202937 network_create.go:287] error running [docker network inspect default-k8s-diff-port-154565]: docker network inspect default-k8s-diff-port-154565: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-154565 not found
	I1029 09:36:42.486798  202937 network_create.go:289] output of [docker network inspect default-k8s-diff-port-154565]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-154565 not found
	
	** /stderr **
	I1029 09:36:42.486904  202937 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:36:42.503611  202937 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
	I1029 09:36:42.503936  202937 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2a2304196dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:c9:a9:e0:d0:7a} reservation:<nil>}
	I1029 09:36:42.504260  202937 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e863a0178057 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:86:09:fc:5e:55} reservation:<nil>}
	I1029 09:36:42.504881  202937 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e3130}
	I1029 09:36:42.504919  202937 network_create.go:124] attempt to create docker network default-k8s-diff-port-154565 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1029 09:36:42.504986  202937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 default-k8s-diff-port-154565
	I1029 09:36:42.590299  202937 network_create.go:108] docker network default-k8s-diff-port-154565 192.168.76.0/24 created
	I1029 09:36:42.590336  202937 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-154565" container
	I1029 09:36:42.590420  202937 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:36:42.607607  202937 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-154565 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:36:42.631316  202937 oci.go:103] Successfully created a docker volume default-k8s-diff-port-154565
	I1029 09:36:42.631437  202937 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-154565-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --entrypoint /usr/bin/test -v default-k8s-diff-port-154565:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:36:43.268151  202937 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-154565
	I1029 09:36:43.268212  202937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:36:43.268233  202937 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:36:43.268416  202937 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-154565:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:36:47.806554  202937 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-154565:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.538091105s)
	I1029 09:36:47.806589  202937 kic.go:203] duration metric: took 4.538352268s to extract preloaded images to volume ...
	W1029 09:36:47.806745  202937 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 09:36:47.806860  202937 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:36:47.865368  202937 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-154565 --name default-k8s-diff-port-154565 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --network default-k8s-diff-port-154565 --ip 192.168.76.2 --volume default-k8s-diff-port-154565:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:36:48.214090  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Running}}
	I1029 09:36:48.236080  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:36:48.264641  202937 cli_runner.go:164] Run: docker exec default-k8s-diff-port-154565 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:36:48.326570  202937 oci.go:144] the created container "default-k8s-diff-port-154565" has a running status.
	I1029 09:36:48.326596  202937 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa...
	I1029 09:36:48.753134  202937 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:36:48.781384  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:36:48.805839  202937 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:36:48.805857  202937 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-154565 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:36:48.862790  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:36:48.891646  202937 machine.go:94] provisionDockerMachine start ...
	I1029 09:36:48.891747  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:48.920666  202937 main.go:143] libmachine: Using SSH client type: native
	I1029 09:36:48.921001  202937 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1029 09:36:48.921011  202937 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:36:48.923755  202937 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60746->127.0.0.1:33073: read: connection reset by peer
	I1029 09:36:52.080270  202937 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:36:52.080292  202937 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-154565"
	I1029 09:36:52.080412  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:52.114120  202937 main.go:143] libmachine: Using SSH client type: native
	I1029 09:36:52.114422  202937 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1029 09:36:52.114439  202937 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-154565 && echo "default-k8s-diff-port-154565" | sudo tee /etc/hostname
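The network_create lines above (09:36:42.503 through 42.590) walk the private /24 candidates in order, skip 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges already use them, and settle on 192.168.76.0/24. A minimal sketch of that first-free-subnet scan, assuming a fixed candidate list rather than minikube's real network package:

package main

import "fmt"

// pickFreeSubnet returns the first candidate CIDR not already in use, mirroring the
// "skipping subnet ... that is taken" / "using free private subnet" lines in the log above.
func pickFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		return cidr, true
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true}
	if cidr, ok := pickFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.76.0/24, as in the log
	}
}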
	
	
	==> CRI-O <==
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.520888909Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d1d1d71-ef94-4df5-95ad-41aa466330a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.522686521Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aa951b0f-f1d0-4e2f-9284-4ba488fa4fcd name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.525601279Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper" id=519a8571-0e8c-4d06-b67b-e84ac75da46f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.525720049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.543958028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.545061217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.566631427Z" level=info msg="Created container e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper" id=519a8571-0e8c-4d06-b67b-e84ac75da46f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.568213751Z" level=info msg="Starting container: e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1" id=509a6c10-707b-4219-a34c-af3f0cb468fb name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.570045463Z" level=info msg="Started container" PID=1679 containerID=e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper id=509a6c10-707b-4219-a34c-af3f0cb468fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5
	Oct 29 09:36:42 embed-certs-946178 conmon[1677]: conmon e2218004159ca55db5cf <ninfo>: container 1679 exited with status 1
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.739058728Z" level=info msg="Removing container: cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc" id=154cbc07-5ec5-49ea-9ada-b23e8397b9d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.750713937Z" level=info msg="Error loading conmon cgroup of container cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc: cgroup deleted" id=154cbc07-5ec5-49ea-9ada-b23e8397b9d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.75438679Z" level=info msg="Removed container cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper" id=154cbc07-5ec5-49ea-9ada-b23e8397b9d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.460003166Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.464431603Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.464590021Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.46467762Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.468768766Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.468921014Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.469003049Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.478149239Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.478321245Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.478398808Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.481778769Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.481954836Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e2218004159ca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   7b85e3ad306ad       dashboard-metrics-scraper-6ffb444bf9-r7gkw   kubernetes-dashboard
	92f979d951b10       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   f9c2ddf68457f       storage-provisioner                          kube-system
	996e12d138170       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   9fc75eaaddb4b       kubernetes-dashboard-855c9754f9-9fqk4        kubernetes-dashboard
	36a804c0c5629       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   73cfdbf2315df       coredns-66bc5c9577-fszff                     kube-system
	6a6dfee6bac4f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   d2357829908e2       busybox                                      default
	07d7e42fa9617       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   2db4187e4229c       kindnet-8lf6r                                kube-system
	c5f1422765f90       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   4347fb9bc9d13       kube-proxy-8zwf2                             kube-system
	a923c0ed0d988       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   f9c2ddf68457f       storage-provisioner                          kube-system
	8fb3490c8a2c3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   0c6d059bc581a       kube-scheduler-embed-certs-946178            kube-system
	1eca250e7dd68       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   e6c34750dd04d       kube-apiserver-embed-certs-946178            kube-system
	0d84906ed693b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   f31d0d1b68b7b       kube-controller-manager-embed-certs-946178   kube-system
	9ba572ee5a49b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   f55d42a044c62       etcd-embed-certs-946178                      kube-system
	
	
	==> coredns [36a804c0c5629e7001b18516f7faeb77607a1ec446a9dc1dfbac911a500eed0a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55051 - 32422 "HINFO IN 7857672369422735980.4433715007118956098. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02404031s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
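The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS could not reach the kube-apiserver through the default kubernetes Service VIP at that point. A minimal reachability probe showing what such a dial looks like (a standalone sketch, not CoreDNS code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "10.96.0.1:443" // kubernetes Service ClusterIP from the log above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // e.g. "i/o timeout", as CoreDNS reports
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable at", addr)
}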
	
	
	==> describe nodes <==
	Name:               embed-certs-946178
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-946178
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=embed-certs-946178
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_34_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:34:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-946178
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-946178
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                3602b941-fa8a-4d9a-9349-a96421b2f60b
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-fszff                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-946178                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-8lf6r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-946178             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-946178    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-8zwf2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-946178             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r7gkw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9fqk4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m18s                  node-controller  Node embed-certs-946178 event: Registered Node embed-certs-946178 in Controller
	  Normal   NodeReady                95s                    kubelet          Node embed-certs-946178 status is now: NodeReady
	  Normal   Starting                 58s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)      kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)      kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)      kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-946178 event: Registered Node embed-certs-946178 in Controller
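The "Allocated resources" percentages above follow directly from the Allocatable block: 2 CPUs (2000m) and 8022296Ki of memory. A quick check of the arithmetic, with the request and limit totals summed from the pod table (display rounding is kubectl's):

package main

import "fmt"

func main() {
	allocatableCPUm := 2000.0     // 2 CPUs
	allocatableMemKi := 8022296.0 // Allocatable memory from the node above

	cpuRequestsM := 850.0         // 100+100+100+250+200+100 m from the pod table
	cpuLimitsM := 100.0           // kindnet's 100m is the only CPU limit
	memRequestsKi := 220.0 * 1024 // 70Mi + 100Mi + 50Mi requests

	fmt.Printf("cpu requests: %.1f%%\n", cpuRequestsM/allocatableCPUm*100)   // 42.5%, shown as 42%
	fmt.Printf("cpu limits:   %.1f%%\n", cpuLimitsM/allocatableCPUm*100)     // 5.0%
	fmt.Printf("memory req:   %.1f%%\n", memRequestsKi/allocatableMemKi*100) // ~2.8%, shown as 2%
}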
	
	
	==> dmesg <==
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a] <==
	{"level":"warn","ts":"2025-10-29T09:36:01.603088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.641060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.653156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.694279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.710546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.723941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.746797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.765569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.806242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.834024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.845313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.882857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.884779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.905933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.946314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.963285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.982840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.004622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.019574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.036203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.061288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.098286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.118222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.137279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.244898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:55 up  1:19,  0 user,  load average: 3.19, 3.62, 2.86
	Linux embed-certs-946178 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07d7e42fa96175c53a244265ef556c75d7caea96ca747163e76e54182722faa4] <==
	I1029 09:36:04.275085       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:36:04.275589       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:36:04.275776       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:36:04.275819       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:36:04.275860       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:36:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:36:04.459580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:36:04.459599       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:36:04.459608       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:36:04.459890       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:36:34.459550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:36:34.459807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:36:34.459910       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:36:34.464395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1029 09:36:35.759861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:36:35.759891       1 metrics.go:72] Registering metrics
	I1029 09:36:35.759961       1 controller.go:711] "Syncing nftables rules"
	I1029 09:36:44.459732       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:36:44.459786       1 main.go:301] handling current node
	I1029 09:36:54.464424       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:36:54.464465       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68] <==
	I1029 09:36:03.249686       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:36:03.249727       1 policy_source.go:240] refreshing policies
	I1029 09:36:03.252420       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:36:03.252474       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:36:03.253417       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:36:03.253447       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:36:03.253456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:36:03.253463       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:36:03.264703       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:36:03.324573       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:36:03.324896       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:36:03.337810       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:36:03.385599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:36:03.571812       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:36:03.827262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:36:04.390974       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:36:04.448428       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:36:04.490532       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:36:04.505627       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1029 09:36:04.698900       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:36:04.700259       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:36:04.721274       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.22.15"}
	I1029 09:36:04.727034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:36:04.773472       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.186.6"}
	I1029 09:36:06.846384       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365] <==
	I1029 09:36:06.619748       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:36:06.619892       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:36:06.619920       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:36:06.632782       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:36:06.632804       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:36:06.632815       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:36:06.637114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:36:06.641930       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:36:06.642935       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:36:06.643283       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:36:06.643654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:36:06.643720       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:36:06.643793       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:36:06.643837       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:36:06.644018       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:36:06.644106       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:36:06.644448       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:36:06.646103       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:36:06.647650       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:36:06.647762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:36:06.652678       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:36:06.656035       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:36:06.682821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:36:06.682846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:36:06.682856       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c5f1422765f907e545795e959e1a1fd7204c59fb9f789c52d7bf772991a37142] <==
	I1029 09:36:04.373811       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:36:04.494360       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:36:04.594966       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:36:04.599870       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:36:04.600027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:36:04.785009       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:36:04.785060       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:36:04.790134       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:36:04.790577       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:36:04.790765       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:36:04.794906       1 config.go:200] "Starting service config controller"
	I1029 09:36:04.794997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:36:04.795042       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:36:04.795081       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:36:04.796277       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:36:04.796379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:36:04.798975       1 config.go:309] "Starting node config controller"
	I1029 09:36:04.799078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:36:04.799111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:36:04.895415       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:36:04.896632       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:36:04.896646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15] <==
	I1029 09:36:01.385370       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:36:02.958738       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:36:02.958833       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:36:02.958866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:36:02.958895       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:36:03.206561       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:36:03.206590       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:36:03.234902       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:36:03.235047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:36:03.235077       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:36:03.235096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:36:03.335135       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:36:07 embed-certs-946178 kubelet[781]: I1029 09:36:07.271042     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtxx\" (UniqueName: \"kubernetes.io/projected/85e456db-0228-4712-8f17-2c28e9122628-kube-api-access-6qtxx\") pod \"kubernetes-dashboard-855c9754f9-9fqk4\" (UID: \"85e456db-0228-4712-8f17-2c28e9122628\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9fqk4"
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382350     781 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382356     781 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382407     781 projected.go:196] Error preparing data for projected volume kube-api-access-clfk7 for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382952     781 projected.go:196] Error preparing data for projected volume kube-api-access-6qtxx for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9fqk4: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.384351     781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d3df36e-b6d5-4cd8-9172-888defbe2de0-kube-api-access-clfk7 podName:5d3df36e-b6d5-4cd8-9172-888defbe2de0 nodeName:}" failed. No retries permitted until 2025-10-29 09:36:08.883660989 +0000 UTC m=+11.586095630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-clfk7" (UniqueName: "kubernetes.io/projected/5d3df36e-b6d5-4cd8-9172-888defbe2de0-kube-api-access-clfk7") pod "dashboard-metrics-scraper-6ffb444bf9-r7gkw" (UID: "5d3df36e-b6d5-4cd8-9172-888defbe2de0") : failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.384542     781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85e456db-0228-4712-8f17-2c28e9122628-kube-api-access-6qtxx podName:85e456db-0228-4712-8f17-2c28e9122628 nodeName:}" failed. No retries permitted until 2025-10-29 09:36:08.884516668 +0000 UTC m=+11.586951309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6qtxx" (UniqueName: "kubernetes.io/projected/85e456db-0228-4712-8f17-2c28e9122628-kube-api-access-6qtxx") pod "kubernetes-dashboard-855c9754f9-9fqk4" (UID: "85e456db-0228-4712-8f17-2c28e9122628") : failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:09 embed-certs-946178 kubelet[781]: W1029 09:36:09.081342     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/crio-7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5 WatchSource:0}: Error finding container 7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5: Status 404 returned error can't find the container with id 7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5
	Oct 29 09:36:14 embed-certs-946178 kubelet[781]: I1029 09:36:14.688452     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9fqk4" podStartSLOduration=2.611102741 podStartE2EDuration="7.688431591s" podCreationTimestamp="2025-10-29 09:36:07 +0000 UTC" firstStartedPulling="2025-10-29 09:36:09.063484068 +0000 UTC m=+11.765918709" lastFinishedPulling="2025-10-29 09:36:14.140812918 +0000 UTC m=+16.843247559" observedRunningTime="2025-10-29 09:36:14.677259883 +0000 UTC m=+17.379694540" watchObservedRunningTime="2025-10-29 09:36:14.688431591 +0000 UTC m=+17.390866240"
	Oct 29 09:36:19 embed-certs-946178 kubelet[781]: I1029 09:36:19.666680     781 scope.go:117] "RemoveContainer" containerID="690bc808c1df69bacafd8bc271e7c7ed1945b19db88481aef2d495e12b1502d6"
	Oct 29 09:36:20 embed-certs-946178 kubelet[781]: I1029 09:36:20.671612     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:20 embed-certs-946178 kubelet[781]: I1029 09:36:20.672396     781 scope.go:117] "RemoveContainer" containerID="690bc808c1df69bacafd8bc271e7c7ed1945b19db88481aef2d495e12b1502d6"
	Oct 29 09:36:20 embed-certs-946178 kubelet[781]: E1029 09:36:20.672861     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:29 embed-certs-946178 kubelet[781]: I1029 09:36:29.048972     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:29 embed-certs-946178 kubelet[781]: E1029 09:36:29.049232     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:34 embed-certs-946178 kubelet[781]: I1029 09:36:34.712668     781 scope.go:117] "RemoveContainer" containerID="a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: I1029 09:36:42.518016     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: I1029 09:36:42.736736     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: I1029 09:36:42.737161     781 scope.go:117] "RemoveContainer" containerID="e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: E1029 09:36:42.737314     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:49 embed-certs-946178 kubelet[781]: I1029 09:36:49.048299     781 scope.go:117] "RemoveContainer" containerID="e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	Oct 29 09:36:49 embed-certs-946178 kubelet[781]: E1029 09:36:49.049251     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:52 embed-certs-946178 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:36:52 embed-certs-946178 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:36:52 embed-certs-946178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [996e12d138170502140405ed35ffef95cddee211344908ec4df83911094c14ec] <==
	2025/10/29 09:36:14 Using namespace: kubernetes-dashboard
	2025/10/29 09:36:14 Using in-cluster config to connect to apiserver
	2025/10/29 09:36:14 Using secret token for csrf signing
	2025/10/29 09:36:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:36:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:36:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:36:14 Generating JWE encryption key
	2025/10/29 09:36:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:36:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:36:15 Initializing JWE encryption key from synchronized object
	2025/10/29 09:36:15 Creating in-cluster Sidecar client
	2025/10/29 09:36:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:36:15 Serving insecurely on HTTP port: 9090
	2025/10/29 09:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:36:14 Starting overwatch
	
	
	==> storage-provisioner [92f979d951b10a84254b437e918e99e627d64ede9c787c51501596b6a7d466f7] <==
	I1029 09:36:34.809061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:36:34.822284       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:36:34.822454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:36:34.825052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:38.288002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:42.549400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:46.147406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:49.201656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:52.223845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:52.228952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:52.229167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:36:52.229353       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-946178_fa25ca3e-810b-4313-89a7-65a3c86046ac!
	I1029 09:36:52.232439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7498ccd-b53a-40e1-924d-4377223b536f", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-946178_fa25ca3e-810b-4313-89a7-65a3c86046ac became leader
	W1029 09:36:52.238032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:52.246580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:52.332506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-946178_fa25ca3e-810b-4313-89a7-65a3c86046ac!
	W1029 09:36:54.250590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:54.264539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274] <==
	I1029 09:36:04.043326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:36:34.046737       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946178 -n embed-certs-946178
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946178 -n embed-certs-946178: exit status 2 (478.417603ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
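Note: the harness reads one status field per call through a Go template over minikube's status struct (.APIServer above, .Host further below). Combining the two fields in a single call is an illustrative variation, not something this run executed:

	out/minikube-linux-arm64 status -p embed-certs-946178 --format='Host:{{.Host}} APIServer:{{.APIServer}}'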
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-946178 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
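Note: the --field-selector above filters server-side for pods whose phase is anything other than Running, so an empty result means no obviously stuck pods. The same mechanism can target one specific phase; an illustrative variant using the context name from this report:

	kubectl --context embed-certs-946178 get pods -A --field-selector=status.phase=Pending -o wide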
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
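Note: the proxy snapshot is collected because a host-level proxy without a NO_PROXY entry covering the cluster network commonly breaks apiserver connectivity; all three variables are empty in this run. An illustrative shell sketch only (the proxy URL is hypothetical; 192.168.85.0/24 is the node network used by this profile):

	export HTTPS_PROXY=http://proxy.example.com:3128
	export NO_PROXY=localhost,127.0.0.1,192.168.85.0/24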
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-946178
helpers_test.go:243: (dbg) docker inspect embed-certs-946178:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691",
	        "Created": "2025-10-29T09:34:04.151290839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 199214,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:35:48.64884844Z",
	            "FinishedAt": "2025-10-29T09:35:47.485114596Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/hostname",
	        "HostsPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/hosts",
	        "LogPath": "/var/lib/docker/containers/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691-json.log",
	        "Name": "/embed-certs-946178",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-946178:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-946178",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691",
	                "LowerDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e4b8a36d03e2aa5ecd176b333f544932579c1dad010690bf16775b13c5b7cee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-946178",
	                "Source": "/var/lib/docker/volumes/embed-certs-946178/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-946178",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-946178",
	                "name.minikube.sigs.k8s.io": "embed-certs-946178",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e6b111990b4d7fb35c76796e66715d64efdd480f5d2f5bb1562cfe3843e4566",
	            "SandboxKey": "/var/run/docker/netns/9e6b111990b4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-946178": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:04:0f:be:74:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58e14a6bd5919ac00c4f79c5de1533110411df785cd7d398ccc05d5f98f62442",
	                    "EndpointID": "20bd8bbc13e2a98c0c1ae2f30ed4850f24093b2809646ac40bd1b95880e8b38c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-946178",
	                        "b005fccf23a7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
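Note: the full docker inspect JSON above can be reduced to the fields most relevant to a Pause failure with a Go template; an illustrative call using fields present in the output (State.Status, State.Paused):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' embed-certs-946178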
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178: exit status 2 (454.252268ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-946178 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-946178 logs -n 25: (1.554722175s)
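Note: the post-mortem deliberately limits itself to the last 25 lines per component via -n; when more context is needed the same command accepts a larger count (illustrative only, not run here):

	out/minikube-linux-arm64 -p embed-certs-946178 logs -n 200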
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:32 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-690444       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ image   │ old-k8s-version-162751 image list --format=json                                                                                                                                                                                               │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ pause   │ -p old-k8s-version-162751 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │                     │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:36:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:36:42.181714  202937 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:36:42.182940  202937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:42.182957  202937 out.go:374] Setting ErrFile to fd 2...
	I1029 09:36:42.182963  202937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:36:42.183416  202937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:36:42.184024  202937 out.go:368] Setting JSON to false
	I1029 09:36:42.185098  202937 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4754,"bootTime":1761725848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:36:42.185194  202937 start.go:143] virtualization:  
	I1029 09:36:42.189511  202937 out.go:179] * [default-k8s-diff-port-154565] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:36:42.194065  202937 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:36:42.194253  202937 notify.go:221] Checking for updates...
	I1029 09:36:42.200844  202937 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:36:42.204237  202937 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:36:42.207538  202937 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:36:42.211047  202937 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:36:42.214224  202937 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:36:42.218247  202937 config.go:182] Loaded profile config "embed-certs-946178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:36:42.218453  202937 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:36:42.255302  202937 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:36:42.255467  202937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:36:42.336779  202937 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:36:42.326745732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:36:42.336893  202937 docker.go:319] overlay module found
	I1029 09:36:42.340161  202937 out.go:179] * Using the docker driver based on user configuration
	I1029 09:36:42.343132  202937 start.go:309] selected driver: docker
	I1029 09:36:42.343153  202937 start.go:930] validating driver "docker" against <nil>
	I1029 09:36:42.343166  202937 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:36:42.343929  202937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:36:42.406824  202937 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:36:42.396898887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:36:42.406980  202937 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:36:42.407214  202937 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:36:42.410332  202937 out.go:179] * Using Docker driver with root privileges
	I1029 09:36:42.413368  202937 cni.go:84] Creating CNI manager for ""
	I1029 09:36:42.413440  202937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:36:42.413453  202937 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:36:42.413540  202937 start.go:353] cluster config:
	{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:36:42.418572  202937 out.go:179] * Starting "default-k8s-diff-port-154565" primary control-plane node in "default-k8s-diff-port-154565" cluster
	I1029 09:36:42.421347  202937 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:36:42.424675  202937 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:36:42.427462  202937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:36:42.427490  202937 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:36:42.427522  202937 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:36:42.427531  202937 cache.go:59] Caching tarball of preloaded images
	I1029 09:36:42.427611  202937 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:36:42.427621  202937 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:36:42.427726  202937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:36:42.427742  202937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json: {Name:mkfbd6d6bbc51eb8a3e524e494228ae4772192a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:36:42.448770  202937 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:36:42.448795  202937 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:36:42.448810  202937 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:36:42.448879  202937 start.go:360] acquireMachinesLock for default-k8s-diff-port-154565: {Name:mk949f3a944b6d0d5624c677fdcfbf59ea2f05b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:36:42.449027  202937 start.go:364] duration metric: took 124.26µs to acquireMachinesLock for "default-k8s-diff-port-154565"
	I1029 09:36:42.449058  202937 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:36:42.449145  202937 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:36:42.454489  202937 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:36:42.454734  202937 start.go:159] libmachine.API.Create for "default-k8s-diff-port-154565" (driver="docker")
	I1029 09:36:42.454781  202937 client.go:173] LocalClient.Create starting
	I1029 09:36:42.454864  202937 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 09:36:42.454904  202937 main.go:143] libmachine: Decoding PEM data...
	I1029 09:36:42.454918  202937 main.go:143] libmachine: Parsing certificate...
	I1029 09:36:42.454980  202937 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 09:36:42.455006  202937 main.go:143] libmachine: Decoding PEM data...
	I1029 09:36:42.455017  202937 main.go:143] libmachine: Parsing certificate...
	I1029 09:36:42.455391  202937 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-154565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:36:42.471681  202937 cli_runner.go:211] docker network inspect default-k8s-diff-port-154565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:36:42.471778  202937 network_create.go:284] running [docker network inspect default-k8s-diff-port-154565] to gather additional debugging logs...
	I1029 09:36:42.471800  202937 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-154565
	W1029 09:36:42.486741  202937 cli_runner.go:211] docker network inspect default-k8s-diff-port-154565 returned with exit code 1
	I1029 09:36:42.486783  202937 network_create.go:287] error running [docker network inspect default-k8s-diff-port-154565]: docker network inspect default-k8s-diff-port-154565: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-154565 not found
	I1029 09:36:42.486798  202937 network_create.go:289] output of [docker network inspect default-k8s-diff-port-154565]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-154565 not found
	
	** /stderr **
	I1029 09:36:42.486904  202937 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:36:42.503611  202937 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
	I1029 09:36:42.503936  202937 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2a2304196dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:c9:a9:e0:d0:7a} reservation:<nil>}
	I1029 09:36:42.504260  202937 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e863a0178057 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:86:09:fc:5e:55} reservation:<nil>}
	I1029 09:36:42.504881  202937 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e3130}
	I1029 09:36:42.504919  202937 network_create.go:124] attempt to create docker network default-k8s-diff-port-154565 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1029 09:36:42.504986  202937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 default-k8s-diff-port-154565
	I1029 09:36:42.590299  202937 network_create.go:108] docker network default-k8s-diff-port-154565 192.168.76.0/24 created
	I1029 09:36:42.590336  202937 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-154565" container
	I1029 09:36:42.590420  202937 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:36:42.607607  202937 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-154565 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:36:42.631316  202937 oci.go:103] Successfully created a docker volume default-k8s-diff-port-154565
	I1029 09:36:42.631437  202937 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-154565-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --entrypoint /usr/bin/test -v default-k8s-diff-port-154565:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:36:43.268151  202937 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-154565
	I1029 09:36:43.268212  202937 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:36:43.268233  202937 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:36:43.268416  202937 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-154565:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:36:47.806554  202937 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-154565:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.538091105s)
	I1029 09:36:47.806589  202937 kic.go:203] duration metric: took 4.538352268s to extract preloaded images to volume ...
	W1029 09:36:47.806745  202937 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 09:36:47.806860  202937 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:36:47.865368  202937 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-154565 --name default-k8s-diff-port-154565 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-154565 --network default-k8s-diff-port-154565 --ip 192.168.76.2 --volume default-k8s-diff-port-154565:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:36:48.214090  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Running}}
	I1029 09:36:48.236080  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:36:48.264641  202937 cli_runner.go:164] Run: docker exec default-k8s-diff-port-154565 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:36:48.326570  202937 oci.go:144] the created container "default-k8s-diff-port-154565" has a running status.
	I1029 09:36:48.326596  202937 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa...
	I1029 09:36:48.753134  202937 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:36:48.781384  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:36:48.805839  202937 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:36:48.805857  202937 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-154565 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:36:48.862790  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:36:48.891646  202937 machine.go:94] provisionDockerMachine start ...
	I1029 09:36:48.891747  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:48.920666  202937 main.go:143] libmachine: Using SSH client type: native
	I1029 09:36:48.921001  202937 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1029 09:36:48.921011  202937 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:36:48.923755  202937 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60746->127.0.0.1:33073: read: connection reset by peer
	I1029 09:36:52.080270  202937 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:36:52.080292  202937 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-154565"
	I1029 09:36:52.080412  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:52.114120  202937 main.go:143] libmachine: Using SSH client type: native
	I1029 09:36:52.114422  202937 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1029 09:36:52.114439  202937 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-154565 && echo "default-k8s-diff-port-154565" | sudo tee /etc/hostname
	I1029 09:36:52.285998  202937 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:36:52.286088  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:52.306005  202937 main.go:143] libmachine: Using SSH client type: native
	I1029 09:36:52.306305  202937 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1029 09:36:52.306322  202937 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-154565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-154565/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-154565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:36:52.472645  202937 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:36:52.472671  202937 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:36:52.472758  202937 ubuntu.go:190] setting up certificates
	I1029 09:36:52.472768  202937 provision.go:84] configureAuth start
	I1029 09:36:52.472845  202937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:36:52.491445  202937 provision.go:143] copyHostCerts
	I1029 09:36:52.491527  202937 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:36:52.491538  202937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:36:52.491617  202937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:36:52.491717  202937 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:36:52.491723  202937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:36:52.491749  202937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:36:52.491804  202937 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:36:52.491809  202937 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:36:52.491832  202937 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:36:52.491880  202937 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-154565 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-154565 localhost minikube]
	I1029 09:36:52.765817  202937 provision.go:177] copyRemoteCerts
	I1029 09:36:52.765977  202937 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:36:52.766044  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:52.783735  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:36:52.893580  202937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:36:52.919773  202937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:36:52.938743  202937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1029 09:36:52.956586  202937 provision.go:87] duration metric: took 483.788749ms to configureAuth
	I1029 09:36:52.956671  202937 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:36:52.956890  202937 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:36:52.957006  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:52.973769  202937 main.go:143] libmachine: Using SSH client type: native
	I1029 09:36:52.974087  202937 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1029 09:36:52.974109  202937 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:36:53.320143  202937 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:36:53.320178  202937 machine.go:97] duration metric: took 4.428503431s to provisionDockerMachine
	I1029 09:36:53.320190  202937 client.go:176] duration metric: took 10.865397036s to LocalClient.Create
	I1029 09:36:53.320205  202937 start.go:167] duration metric: took 10.86547135s to libmachine.API.Create "default-k8s-diff-port-154565"
	I1029 09:36:53.320216  202937 start.go:293] postStartSetup for "default-k8s-diff-port-154565" (driver="docker")
	I1029 09:36:53.320231  202937 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:36:53.320352  202937 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:36:53.320404  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:53.342736  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:36:53.450025  202937 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:36:53.455416  202937 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:36:53.455494  202937 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:36:53.455522  202937 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:36:53.455611  202937 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:36:53.455756  202937 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:36:53.455926  202937 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:36:53.464670  202937 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:36:53.483398  202937 start.go:296] duration metric: took 163.163779ms for postStartSetup
	I1029 09:36:53.483789  202937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:36:53.502257  202937 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:36:53.502537  202937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:36:53.502586  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:53.532332  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:36:53.643543  202937 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:36:53.649770  202937 start.go:128] duration metric: took 11.200609934s to createHost
	I1029 09:36:53.649797  202937 start.go:83] releasing machines lock for "default-k8s-diff-port-154565", held for 11.200757234s
	I1029 09:36:53.649877  202937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:36:53.669580  202937 ssh_runner.go:195] Run: cat /version.json
	I1029 09:36:53.669646  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:53.669922  202937 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:36:53.669989  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:36:53.691608  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:36:53.707624  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:36:53.812355  202937 ssh_runner.go:195] Run: systemctl --version
	I1029 09:36:53.928470  202937 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:36:53.989843  202937 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:36:53.997984  202937 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:36:53.998059  202937 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:36:54.044994  202937 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 09:36:54.045014  202937 start.go:496] detecting cgroup driver to use...
	I1029 09:36:54.045048  202937 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:36:54.045098  202937 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:36:54.078741  202937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:36:54.094231  202937 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:36:54.094305  202937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:36:54.112482  202937 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:36:54.132744  202937 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:36:54.316392  202937 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:36:54.483307  202937 docker.go:234] disabling docker service ...
	I1029 09:36:54.483392  202937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:36:54.516425  202937 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:36:54.534907  202937 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:36:54.674391  202937 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:36:54.844842  202937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:36:54.859985  202937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:36:54.875710  202937 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:36:54.875772  202937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.887279  202937 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:36:54.887359  202937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.899743  202937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.908787  202937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.922335  202937 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:36:54.932018  202937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.942149  202937 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.961817  202937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:36:54.973502  202937 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:36:54.982210  202937 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:36:54.992561  202937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:36:55.154333  202937 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:36:55.297342  202937 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:36:55.297432  202937 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:36:55.301854  202937 start.go:564] Will wait 60s for crictl version
	I1029 09:36:55.301923  202937 ssh_runner.go:195] Run: which crictl
	I1029 09:36:55.305917  202937 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:36:55.335361  202937 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:36:55.335491  202937 ssh_runner.go:195] Run: crio --version
	I1029 09:36:55.382778  202937 ssh_runner.go:195] Run: crio --version
	I1029 09:36:55.429148  202937 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.520888909Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d1d1d71-ef94-4df5-95ad-41aa466330a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.522686521Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aa951b0f-f1d0-4e2f-9284-4ba488fa4fcd name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.525601279Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper" id=519a8571-0e8c-4d06-b67b-e84ac75da46f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.525720049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.543958028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.545061217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.566631427Z" level=info msg="Created container e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper" id=519a8571-0e8c-4d06-b67b-e84ac75da46f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.568213751Z" level=info msg="Starting container: e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1" id=509a6c10-707b-4219-a34c-af3f0cb468fb name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.570045463Z" level=info msg="Started container" PID=1679 containerID=e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper id=509a6c10-707b-4219-a34c-af3f0cb468fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5
	Oct 29 09:36:42 embed-certs-946178 conmon[1677]: conmon e2218004159ca55db5cf <ninfo>: container 1679 exited with status 1
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.739058728Z" level=info msg="Removing container: cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc" id=154cbc07-5ec5-49ea-9ada-b23e8397b9d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.750713937Z" level=info msg="Error loading conmon cgroup of container cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc: cgroup deleted" id=154cbc07-5ec5-49ea-9ada-b23e8397b9d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:36:42 embed-certs-946178 crio[653]: time="2025-10-29T09:36:42.75438679Z" level=info msg="Removed container cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw/dashboard-metrics-scraper" id=154cbc07-5ec5-49ea-9ada-b23e8397b9d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.460003166Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.464431603Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.464590021Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.46467762Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.468768766Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.468921014Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.469003049Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.478149239Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.478321245Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.478398808Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.481778769Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:36:44 embed-certs-946178 crio[653]: time="2025-10-29T09:36:44.481954836Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e2218004159ca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   7b85e3ad306ad       dashboard-metrics-scraper-6ffb444bf9-r7gkw   kubernetes-dashboard
	92f979d951b10       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago      Running             storage-provisioner         2                   f9c2ddf68457f       storage-provisioner                          kube-system
	996e12d138170       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   9fc75eaaddb4b       kubernetes-dashboard-855c9754f9-9fqk4        kubernetes-dashboard
	36a804c0c5629       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   73cfdbf2315df       coredns-66bc5c9577-fszff                     kube-system
	6a6dfee6bac4f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   d2357829908e2       busybox                                      default
	07d7e42fa9617       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   2db4187e4229c       kindnet-8lf6r                                kube-system
	c5f1422765f90       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   4347fb9bc9d13       kube-proxy-8zwf2                             kube-system
	a923c0ed0d988       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   f9c2ddf68457f       storage-provisioner                          kube-system
	8fb3490c8a2c3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   0c6d059bc581a       kube-scheduler-embed-certs-946178            kube-system
	1eca250e7dd68       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   e6c34750dd04d       kube-apiserver-embed-certs-946178            kube-system
	0d84906ed693b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   f31d0d1b68b7b       kube-controller-manager-embed-certs-946178   kube-system
	9ba572ee5a49b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   f55d42a044c62       etcd-embed-certs-946178                      kube-system
	
	
	==> coredns [36a804c0c5629e7001b18516f7faeb77607a1ec446a9dc1dfbac911a500eed0a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55051 - 32422 "HINFO IN 7857672369422735980.4433715007118956098. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02404031s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-946178
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-946178
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=embed-certs-946178
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_34_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:34:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-946178
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:34:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:36:33 +0000   Wed, 29 Oct 2025 09:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-946178
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                3602b941-fa8a-4d9a-9349-a96421b2f60b
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-fszff                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-946178                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-8lf6r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-946178             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-embed-certs-946178    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-8zwf2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-946178             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r7gkw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9fqk4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-946178 event: Registered Node embed-certs-946178 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-946178 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node embed-certs-946178 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node embed-certs-946178 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node embed-certs-946178 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-946178 event: Registered Node embed-certs-946178 in Controller
	
	
	==> dmesg <==
	[Oct29 09:08] overlayfs: idmapped layers are currently not supported
	[Oct29 09:10] overlayfs: idmapped layers are currently not supported
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9ba572ee5a49b071c9887b1b7536d698adcfa4c4fe872393a5200107f89ce91a] <==
	{"level":"warn","ts":"2025-10-29T09:36:01.603088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.641060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.653156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.694279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.710546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.723941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.746797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.765569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.806242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.834024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.845313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.882857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.884779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.905933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.946314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.963285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:01.982840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.004622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.019574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.036203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.061288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.098286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.118222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.137279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:36:02.244898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:58 up  1:19,  0 user,  load average: 3.09, 3.59, 2.86
	Linux embed-certs-946178 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07d7e42fa96175c53a244265ef556c75d7caea96ca747163e76e54182722faa4] <==
	I1029 09:36:04.275085       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:36:04.275589       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:36:04.275776       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:36:04.275819       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:36:04.275860       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:36:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:36:04.459580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:36:04.459599       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:36:04.459608       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:36:04.459890       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:36:34.459550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:36:34.459807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:36:34.459910       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:36:34.464395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1029 09:36:35.759861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:36:35.759891       1 metrics.go:72] Registering metrics
	I1029 09:36:35.759961       1 controller.go:711] "Syncing nftables rules"
	I1029 09:36:44.459732       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:36:44.459786       1 main.go:301] handling current node
	I1029 09:36:54.464424       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:36:54.464465       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1eca250e7dd68ca1de609c5e6810695c68eaea3b51a86f93331e6d7205acad68] <==
	I1029 09:36:03.249686       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:36:03.249727       1 policy_source.go:240] refreshing policies
	I1029 09:36:03.252420       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:36:03.252474       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:36:03.253417       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:36:03.253447       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:36:03.253456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:36:03.253463       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:36:03.264703       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:36:03.324573       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:36:03.324896       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:36:03.337810       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:36:03.385599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:36:03.571812       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:36:03.827262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:36:04.390974       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:36:04.448428       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:36:04.490532       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:36:04.505627       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1029 09:36:04.698900       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:36:04.700259       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:36:04.721274       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.22.15"}
	I1029 09:36:04.727034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:36:04.773472       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.186.6"}
	I1029 09:36:06.846384       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d84906ed693bbd1f66a0d46ac97dbb716c04201acaa1b9f85ffecdd60d49365] <==
	I1029 09:36:06.619748       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:36:06.619892       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:36:06.619920       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:36:06.632782       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:36:06.632804       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:36:06.632815       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:36:06.637114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:36:06.641930       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:36:06.642935       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:36:06.643283       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:36:06.643654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:36:06.643720       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:36:06.643793       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:36:06.643837       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:36:06.644018       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:36:06.644106       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:36:06.644448       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:36:06.646103       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:36:06.647650       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:36:06.647762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:36:06.652678       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:36:06.656035       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:36:06.682821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:36:06.682846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:36:06.682856       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c5f1422765f907e545795e959e1a1fd7204c59fb9f789c52d7bf772991a37142] <==
	I1029 09:36:04.373811       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:36:04.494360       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:36:04.594966       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:36:04.599870       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:36:04.600027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:36:04.785009       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:36:04.785060       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:36:04.790134       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:36:04.790577       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:36:04.790765       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:36:04.794906       1 config.go:200] "Starting service config controller"
	I1029 09:36:04.794997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:36:04.795042       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:36:04.795081       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:36:04.796277       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:36:04.796379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:36:04.798975       1 config.go:309] "Starting node config controller"
	I1029 09:36:04.799078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:36:04.799111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:36:04.895415       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:36:04.896632       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:36:04.896646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8fb3490c8a2c3fa9b6f908aac7af524a8a6b713d4b1306444595caf0ed320c15] <==
	I1029 09:36:01.385370       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:36:02.958738       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:36:02.958833       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:36:02.958866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:36:02.958895       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:36:03.206561       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:36:03.206590       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:36:03.234902       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:36:03.235047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:36:03.235077       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:36:03.235096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:36:03.335135       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:36:07 embed-certs-946178 kubelet[781]: I1029 09:36:07.271042     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtxx\" (UniqueName: \"kubernetes.io/projected/85e456db-0228-4712-8f17-2c28e9122628-kube-api-access-6qtxx\") pod \"kubernetes-dashboard-855c9754f9-9fqk4\" (UID: \"85e456db-0228-4712-8f17-2c28e9122628\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9fqk4"
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382350     781 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382356     781 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382407     781 projected.go:196] Error preparing data for projected volume kube-api-access-clfk7 for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.382952     781 projected.go:196] Error preparing data for projected volume kube-api-access-6qtxx for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9fqk4: failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.384351     781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d3df36e-b6d5-4cd8-9172-888defbe2de0-kube-api-access-clfk7 podName:5d3df36e-b6d5-4cd8-9172-888defbe2de0 nodeName:}" failed. No retries permitted until 2025-10-29 09:36:08.883660989 +0000 UTC m=+11.586095630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-clfk7" (UniqueName: "kubernetes.io/projected/5d3df36e-b6d5-4cd8-9172-888defbe2de0-kube-api-access-clfk7") pod "dashboard-metrics-scraper-6ffb444bf9-r7gkw" (UID: "5d3df36e-b6d5-4cd8-9172-888defbe2de0") : failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:08 embed-certs-946178 kubelet[781]: E1029 09:36:08.384542     781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85e456db-0228-4712-8f17-2c28e9122628-kube-api-access-6qtxx podName:85e456db-0228-4712-8f17-2c28e9122628 nodeName:}" failed. No retries permitted until 2025-10-29 09:36:08.884516668 +0000 UTC m=+11.586951309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6qtxx" (UniqueName: "kubernetes.io/projected/85e456db-0228-4712-8f17-2c28e9122628-kube-api-access-6qtxx") pod "kubernetes-dashboard-855c9754f9-9fqk4" (UID: "85e456db-0228-4712-8f17-2c28e9122628") : failed to sync configmap cache: timed out waiting for the condition
	Oct 29 09:36:09 embed-certs-946178 kubelet[781]: W1029 09:36:09.081342     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b005fccf23a74438eb05b59784e74cfa3bc0ac4e930e3c2493cdca7d2b239691/crio-7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5 WatchSource:0}: Error finding container 7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5: Status 404 returned error can't find the container with id 7b85e3ad306ad386981ab94a6d2cfb67f8222eeb76aacdcf16ca27309c96dfe5
	Oct 29 09:36:14 embed-certs-946178 kubelet[781]: I1029 09:36:14.688452     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9fqk4" podStartSLOduration=2.611102741 podStartE2EDuration="7.688431591s" podCreationTimestamp="2025-10-29 09:36:07 +0000 UTC" firstStartedPulling="2025-10-29 09:36:09.063484068 +0000 UTC m=+11.765918709" lastFinishedPulling="2025-10-29 09:36:14.140812918 +0000 UTC m=+16.843247559" observedRunningTime="2025-10-29 09:36:14.677259883 +0000 UTC m=+17.379694540" watchObservedRunningTime="2025-10-29 09:36:14.688431591 +0000 UTC m=+17.390866240"
	Oct 29 09:36:19 embed-certs-946178 kubelet[781]: I1029 09:36:19.666680     781 scope.go:117] "RemoveContainer" containerID="690bc808c1df69bacafd8bc271e7c7ed1945b19db88481aef2d495e12b1502d6"
	Oct 29 09:36:20 embed-certs-946178 kubelet[781]: I1029 09:36:20.671612     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:20 embed-certs-946178 kubelet[781]: I1029 09:36:20.672396     781 scope.go:117] "RemoveContainer" containerID="690bc808c1df69bacafd8bc271e7c7ed1945b19db88481aef2d495e12b1502d6"
	Oct 29 09:36:20 embed-certs-946178 kubelet[781]: E1029 09:36:20.672861     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:29 embed-certs-946178 kubelet[781]: I1029 09:36:29.048972     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:29 embed-certs-946178 kubelet[781]: E1029 09:36:29.049232     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:34 embed-certs-946178 kubelet[781]: I1029 09:36:34.712668     781 scope.go:117] "RemoveContainer" containerID="a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: I1029 09:36:42.518016     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: I1029 09:36:42.736736     781 scope.go:117] "RemoveContainer" containerID="cad9e36a641d17e73d7cc7201f5f4dda3e85d700e57ff08424a8eb5b89f05dbc"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: I1029 09:36:42.737161     781 scope.go:117] "RemoveContainer" containerID="e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	Oct 29 09:36:42 embed-certs-946178 kubelet[781]: E1029 09:36:42.737314     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:49 embed-certs-946178 kubelet[781]: I1029 09:36:49.048299     781 scope.go:117] "RemoveContainer" containerID="e2218004159ca55db5cf8be4df666cd00910c74fa38f64de4c5b3bf57a9c52d1"
	Oct 29 09:36:49 embed-certs-946178 kubelet[781]: E1029 09:36:49.049251     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r7gkw_kubernetes-dashboard(5d3df36e-b6d5-4cd8-9172-888defbe2de0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r7gkw" podUID="5d3df36e-b6d5-4cd8-9172-888defbe2de0"
	Oct 29 09:36:52 embed-certs-946178 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:36:52 embed-certs-946178 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:36:52 embed-certs-946178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [996e12d138170502140405ed35ffef95cddee211344908ec4df83911094c14ec] <==
	2025/10/29 09:36:14 Using namespace: kubernetes-dashboard
	2025/10/29 09:36:14 Using in-cluster config to connect to apiserver
	2025/10/29 09:36:14 Using secret token for csrf signing
	2025/10/29 09:36:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:36:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:36:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:36:14 Generating JWE encryption key
	2025/10/29 09:36:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:36:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:36:15 Initializing JWE encryption key from synchronized object
	2025/10/29 09:36:15 Creating in-cluster Sidecar client
	2025/10/29 09:36:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:36:15 Serving insecurely on HTTP port: 9090
	2025/10/29 09:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:36:14 Starting overwatch
	
	
	==> storage-provisioner [92f979d951b10a84254b437e918e99e627d64ede9c787c51501596b6a7d466f7] <==
	I1029 09:36:34.809061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:36:34.822284       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:36:34.822454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:36:34.825052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:38.288002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:42.549400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:46.147406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:49.201656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:52.223845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:52.228952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:52.229167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:36:52.229353       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-946178_fa25ca3e-810b-4313-89a7-65a3c86046ac!
	I1029 09:36:52.232439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7498ccd-b53a-40e1-924d-4377223b536f", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-946178_fa25ca3e-810b-4313-89a7-65a3c86046ac became leader
	W1029 09:36:52.238032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:52.246580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:36:52.332506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-946178_fa25ca3e-810b-4313-89a7-65a3c86046ac!
	W1029 09:36:54.250590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:54.264539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:56.268672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:56.275460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:58.279001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:36:58.293351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a923c0ed0d9882028afa7a7955c093bae07f06294b087c3eb1720d7f340d0274] <==
	I1029 09:36:04.043326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:36:34.046737       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946178 -n embed-certs-946178
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946178 -n embed-certs-946178: exit status 2 (440.363819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-946178 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.45s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (284.920735ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:37:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-194729
helpers_test.go:243: (dbg) docker inspect newest-cni-194729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5",
	        "Created": "2025-10-29T09:37:08.716458695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 206708,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:37:08.796653789Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/hosts",
	        "LogPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5-json.log",
	        "Name": "/newest-cni-194729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-194729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-194729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5",
	                "LowerDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-194729",
	                "Source": "/var/lib/docker/volumes/newest-cni-194729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-194729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-194729",
	                "name.minikube.sigs.k8s.io": "newest-cni-194729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aecdb9d1521dd65cfc4073df36484b990fd51a9a2aae76ff51a561c1bc7b7ac7",
	            "SandboxKey": "/var/run/docker/netns/aecdb9d1521d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-194729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:db:cd:a1:a5:7f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f4995a956f2110ae36f130adfccc0f659ca020749dec44d4be9fd100beca009",
	                    "EndpointID": "b1da7279ec31d506b826e6496ca61b165cd20de8472dde335a246fab3899c518",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-194729",
	                        "e7978179791b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-194729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-194729 logs -n 25: (1.131204457s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ delete  │ -p old-k8s-version-162751                                                                                                                                                                                                                     │ old-k8s-version-162751       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ delete  │ -p cert-expiration-690444                                                                                                                                                                                                                     │ cert-expiration-690444       │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:33 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:37 UTC │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:37:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:37:02.411838  206213 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:37:02.411951  206213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:37:02.411995  206213 out.go:374] Setting ErrFile to fd 2...
	I1029 09:37:02.412002  206213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:37:02.412249  206213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:37:02.412687  206213 out.go:368] Setting JSON to false
	I1029 09:37:02.413561  206213 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4774,"bootTime":1761725848,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:37:02.413627  206213 start.go:143] virtualization:  
	I1029 09:37:02.417903  206213 out.go:179] * [newest-cni-194729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:37:02.421529  206213 notify.go:221] Checking for updates...
	I1029 09:37:02.426209  206213 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:37:02.429630  206213 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:37:02.433179  206213 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:02.436972  206213 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:37:02.440162  206213 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:37:02.443277  206213 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:37:02.447013  206213 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:02.447260  206213 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:37:02.497990  206213 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:37:02.498105  206213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:37:02.608670  206213 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:37:02.599138008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:37:02.608782  206213 docker.go:319] overlay module found
	I1029 09:37:02.612106  206213 out.go:179] * Using the docker driver based on user configuration
	I1029 09:37:02.615054  206213 start.go:309] selected driver: docker
	I1029 09:37:02.615073  206213 start.go:930] validating driver "docker" against <nil>
	I1029 09:37:02.615087  206213 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:37:02.615849  206213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:37:02.704299  206213 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:37:02.694893166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:37:02.704466  206213 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:37:02.704499  206213 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:37:02.704733  206213 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:37:02.707388  206213 out.go:179] * Using Docker driver with root privileges
	I1029 09:37:02.710409  206213 cni.go:84] Creating CNI manager for ""
	I1029 09:37:02.710470  206213 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:02.710478  206213 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:37:02.710552  206213 start.go:353] cluster config:
	{Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:02.713611  206213 out.go:179] * Starting "newest-cni-194729" primary control-plane node in "newest-cni-194729" cluster
	I1029 09:37:02.716370  206213 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:37:02.719325  206213 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:37:02.722124  206213 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:02.722171  206213 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:37:02.722191  206213 cache.go:59] Caching tarball of preloaded images
	I1029 09:37:02.722284  206213 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:37:02.722294  206213 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:37:02.722399  206213 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json ...
	I1029 09:37:02.722416  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json: {Name:mk719a06c57f13e65dc4261461633847711ca324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:02.722560  206213 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:37:02.745393  206213 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:37:02.745412  206213 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:37:02.745425  206213 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:37:02.745446  206213 start.go:360] acquireMachinesLock for newest-cni-194729: {Name:mkd3ffc0a88229da12feec44aaf76435e580410c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:37:02.745545  206213 start.go:364] duration metric: took 83.439µs to acquireMachinesLock for "newest-cni-194729"
	I1029 09:37:02.745569  206213 start.go:93] Provisioning new machine with config: &{Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:37:02.745660  206213 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:37:02.239406  202937 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:37:02.239561  202937 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-154565 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1029 09:37:02.471277  202937 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:37:02.471515  202937 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-154565 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1029 09:37:02.948424  202937 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:37:03.620159  202937 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:37:04.851525  202937 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:37:04.852063  202937 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:37:05.203099  202937 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:37:05.921685  202937 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:37:06.142673  202937 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:37:06.308351  202937 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:37:06.552996  202937 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:37:06.554084  202937 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:37:06.557013  202937 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:37:06.576193  202937 out.go:252]   - Booting up control plane ...
	I1029 09:37:06.576346  202937 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:37:06.576435  202937 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:37:06.576526  202937 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:37:06.617725  202937 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:37:06.617836  202937 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:37:06.634709  202937 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:37:06.634812  202937 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:37:06.634853  202937 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:37:06.830919  202937 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:37:06.831045  202937 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:37:02.749041  206213 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:37:02.749271  206213 start.go:159] libmachine.API.Create for "newest-cni-194729" (driver="docker")
	I1029 09:37:02.749306  206213 client.go:173] LocalClient.Create starting
	I1029 09:37:02.749369  206213 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 09:37:02.749406  206213 main.go:143] libmachine: Decoding PEM data...
	I1029 09:37:02.749419  206213 main.go:143] libmachine: Parsing certificate...
	I1029 09:37:02.749473  206213 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 09:37:02.749489  206213 main.go:143] libmachine: Decoding PEM data...
	I1029 09:37:02.749499  206213 main.go:143] libmachine: Parsing certificate...
	I1029 09:37:02.749865  206213 cli_runner.go:164] Run: docker network inspect newest-cni-194729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:37:02.765821  206213 cli_runner.go:211] docker network inspect newest-cni-194729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:37:02.765906  206213 network_create.go:284] running [docker network inspect newest-cni-194729] to gather additional debugging logs...
	I1029 09:37:02.765928  206213 cli_runner.go:164] Run: docker network inspect newest-cni-194729
	W1029 09:37:02.781857  206213 cli_runner.go:211] docker network inspect newest-cni-194729 returned with exit code 1
	I1029 09:37:02.781883  206213 network_create.go:287] error running [docker network inspect newest-cni-194729]: docker network inspect newest-cni-194729: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-194729 not found
	I1029 09:37:02.781897  206213 network_create.go:289] output of [docker network inspect newest-cni-194729]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-194729 not found
	
	** /stderr **
	I1029 09:37:02.781985  206213 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:37:02.798376  206213 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
	I1029 09:37:02.798710  206213 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2a2304196dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:c9:a9:e0:d0:7a} reservation:<nil>}
	I1029 09:37:02.799030  206213 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e863a0178057 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:86:09:fc:5e:55} reservation:<nil>}
	I1029 09:37:02.799265  206213 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c3acff3dac19 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:94:16:18:e5:62} reservation:<nil>}
	I1029 09:37:02.799653  206213 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a13210}
	I1029 09:37:02.799670  206213 network_create.go:124] attempt to create docker network newest-cni-194729 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1029 09:37:02.799729  206213 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-194729 newest-cni-194729
	I1029 09:37:02.864932  206213 network_create.go:108] docker network newest-cni-194729 192.168.85.0/24 created
	I1029 09:37:02.864961  206213 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-194729" container
	I1029 09:37:02.865038  206213 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:37:02.881918  206213 cli_runner.go:164] Run: docker volume create newest-cni-194729 --label name.minikube.sigs.k8s.io=newest-cni-194729 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:37:02.922008  206213 oci.go:103] Successfully created a docker volume newest-cni-194729
	I1029 09:37:02.922086  206213 cli_runner.go:164] Run: docker run --rm --name newest-cni-194729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-194729 --entrypoint /usr/bin/test -v newest-cni-194729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:37:03.585618  206213 oci.go:107] Successfully prepared a docker volume newest-cni-194729
	I1029 09:37:03.585672  206213 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:03.585691  206213 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:37:03.585758  206213 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-194729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:37:08.332528  202937 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50197358s
	I1029 09:37:08.337593  202937 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:37:08.337695  202937 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1029 09:37:08.338027  202937 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:37:08.338123  202937 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:37:08.637297  206213 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-194729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.051499801s)
	I1029 09:37:08.637329  206213 kic.go:203] duration metric: took 5.051634703s to extract preloaded images to volume ...
	W1029 09:37:08.637486  206213 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1029 09:37:08.637624  206213 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:37:08.694478  206213 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-194729 --name newest-cni-194729 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-194729 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-194729 --network newest-cni-194729 --ip 192.168.85.2 --volume newest-cni-194729:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:37:09.161323  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Running}}
	I1029 09:37:09.191397  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:09.219814  206213 cli_runner.go:164] Run: docker exec newest-cni-194729 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:37:09.294429  206213 oci.go:144] the created container "newest-cni-194729" has a running status.
	I1029 09:37:09.294458  206213 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa...
	I1029 09:37:09.514029  206213 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:37:09.541987  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:09.569334  206213 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:37:09.569357  206213 kic_runner.go:114] Args: [docker exec --privileged newest-cni-194729 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:37:09.632789  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:09.668502  206213 machine.go:94] provisionDockerMachine start ...
	I1029 09:37:09.668591  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:09.697662  206213 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:09.697994  206213 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1029 09:37:09.698011  206213 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:37:09.698607  206213 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56824->127.0.0.1:33078: read: connection reset by peer
	I1029 09:37:12.976734  202937 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.63863272s
	I1029 09:37:15.675027  202937 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.337446077s
	I1029 09:37:16.340661  202937 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002204511s
	I1029 09:37:16.361534  202937 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:37:16.378267  202937 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:37:16.398076  202937 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:37:16.398315  202937 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-154565 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:37:16.418088  202937 kubeadm.go:319] [bootstrap-token] Using token: m82mfn.r4a3vjvf2yvti3ja
	I1029 09:37:12.892094  206213 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-194729
	
	I1029 09:37:12.892178  206213 ubuntu.go:182] provisioning hostname "newest-cni-194729"
	I1029 09:37:12.892273  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:12.938984  206213 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:12.939304  206213 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1029 09:37:12.939316  206213 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-194729 && echo "newest-cni-194729" | sudo tee /etc/hostname
	I1029 09:37:13.156770  206213 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-194729
	
	I1029 09:37:13.156861  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:13.183590  206213 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:13.183895  206213 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1029 09:37:13.183916  206213 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-194729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-194729/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-194729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:37:13.368265  206213 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:37:13.368344  206213 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:37:13.368431  206213 ubuntu.go:190] setting up certificates
	I1029 09:37:13.368460  206213 provision.go:84] configureAuth start
	I1029 09:37:13.368558  206213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:13.418371  206213 provision.go:143] copyHostCerts
	I1029 09:37:13.418432  206213 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:37:13.418441  206213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:37:13.418518  206213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:37:13.418613  206213 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:37:13.418619  206213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:37:13.418645  206213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:37:13.418701  206213 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:37:13.418706  206213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:37:13.418728  206213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:37:13.418780  206213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.newest-cni-194729 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-194729]
	I1029 09:37:14.477908  206213 provision.go:177] copyRemoteCerts
	I1029 09:37:14.478034  206213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:37:14.478094  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:14.511430  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:14.638322  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:37:14.667109  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:37:14.698902  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:37:14.729947  206213 provision.go:87] duration metric: took 1.361461067s to configureAuth
	I1029 09:37:14.729975  206213 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:37:14.730161  206213 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:14.730268  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:14.760563  206213 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:14.760877  206213 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1029 09:37:14.760901  206213 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:37:15.108941  206213 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:37:15.108967  206213 machine.go:97] duration metric: took 5.440439509s to provisionDockerMachine
	I1029 09:37:15.108977  206213 client.go:176] duration metric: took 12.359664748s to LocalClient.Create
	I1029 09:37:15.108990  206213 start.go:167] duration metric: took 12.359721315s to libmachine.API.Create "newest-cni-194729"
	I1029 09:37:15.108998  206213 start.go:293] postStartSetup for "newest-cni-194729" (driver="docker")
	I1029 09:37:15.109008  206213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:37:15.109075  206213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:37:15.109122  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:15.140570  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:15.262652  206213 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:37:15.266785  206213 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:37:15.266810  206213 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:37:15.266820  206213 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:37:15.266874  206213 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:37:15.266952  206213 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:37:15.267058  206213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:37:15.278105  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:37:15.302145  206213 start.go:296] duration metric: took 193.132423ms for postStartSetup
	I1029 09:37:15.302619  206213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:15.329263  206213 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json ...
	I1029 09:37:15.329518  206213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:37:15.329560  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:15.369798  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:15.480341  206213 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:37:15.485486  206213 start.go:128] duration metric: took 12.739810254s to createHost
	I1029 09:37:15.485507  206213 start.go:83] releasing machines lock for "newest-cni-194729", held for 12.739953894s
	I1029 09:37:15.485574  206213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:15.508798  206213 ssh_runner.go:195] Run: cat /version.json
	I1029 09:37:15.508854  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:15.509657  206213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:37:15.509718  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:15.546105  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:15.563043  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:15.660110  206213 ssh_runner.go:195] Run: systemctl --version
	I1029 09:37:15.768084  206213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:37:15.819068  206213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:37:15.823818  206213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:37:15.823934  206213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:37:15.855677  206213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1029 09:37:15.855758  206213 start.go:496] detecting cgroup driver to use...
	I1029 09:37:15.855821  206213 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:37:15.855886  206213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:37:15.875643  206213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:37:15.889502  206213 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:37:15.889569  206213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:37:15.908457  206213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:37:15.929823  206213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:37:16.061884  206213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:37:16.198094  206213 docker.go:234] disabling docker service ...
	I1029 09:37:16.198159  206213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:37:16.219139  206213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:37:16.234100  206213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:37:16.352519  206213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:37:16.513086  206213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:37:16.526602  206213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:37:16.541228  206213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:37:16.541336  206213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.555851  206213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:37:16.555994  206213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.564822  206213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.574190  206213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.583087  206213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:37:16.591222  206213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.600956  206213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.614691  206213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:16.624212  206213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:37:16.632977  206213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:37:16.640755  206213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:16.767376  206213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:37:16.935539  206213 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:37:16.935667  206213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:37:16.943308  206213 start.go:564] Will wait 60s for crictl version
	I1029 09:37:16.943421  206213 ssh_runner.go:195] Run: which crictl
	I1029 09:37:16.948026  206213 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:37:16.983858  206213 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:37:16.984002  206213 ssh_runner.go:195] Run: crio --version
	I1029 09:37:17.019151  206213 ssh_runner.go:195] Run: crio --version
	I1029 09:37:17.061908  206213 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:37:17.064966  206213 cli_runner.go:164] Run: docker network inspect newest-cni-194729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:37:17.090983  206213 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:37:17.094709  206213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:37:17.114512  206213 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:37:16.421021  202937 out.go:252]   - Configuring RBAC rules ...
	I1029 09:37:16.421148  202937 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:37:16.431293  202937 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:37:16.449827  202937 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:37:16.455080  202937 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:37:16.465745  202937 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:37:16.470531  202937 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:37:16.747942  202937 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:37:17.117507  206213 kubeadm.go:884] updating cluster {Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:37:17.117634  206213 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:17.117710  206213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:37:17.173218  206213 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:37:17.173239  206213 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:37:17.173293  206213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:37:17.221914  206213 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:37:17.221940  206213 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:37:17.221949  206213 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:37:17.222036  206213 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-194729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:37:17.222126  206213 ssh_runner.go:195] Run: crio config
	I1029 09:37:17.334700  206213 cni.go:84] Creating CNI manager for ""
	I1029 09:37:17.334719  206213 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:17.334736  206213 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:37:17.334759  206213 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-194729 NodeName:newest-cni-194729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:37:17.334881  206213 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-194729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:37:17.334955  206213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:37:17.345860  206213 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:37:17.345971  206213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:37:17.354789  206213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:37:17.375353  206213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:37:17.398018  206213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
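The rendered config shown earlier is what lands in /var/tmp/minikube/kubeadm.yaml.new here. If a generated config like this ever needs a manual sanity check, recent kubeadm releases can validate it without initializing anything (a sketch, using the binaries path from this log and assuming the validate subcommand is present in v1.34):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new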
	I1029 09:37:17.258933  202937 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:37:17.767001  202937 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:37:17.768762  202937 kubeadm.go:319] 
	I1029 09:37:17.768846  202937 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:37:17.768853  202937 kubeadm.go:319] 
	I1029 09:37:17.768934  202937 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:37:17.768939  202937 kubeadm.go:319] 
	I1029 09:37:17.768974  202937 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:37:17.769455  202937 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:37:17.769514  202937 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:37:17.769519  202937 kubeadm.go:319] 
	I1029 09:37:17.769584  202937 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:37:17.769590  202937 kubeadm.go:319] 
	I1029 09:37:17.769639  202937 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:37:17.769644  202937 kubeadm.go:319] 
	I1029 09:37:17.769712  202937 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:37:17.769792  202937 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:37:17.769863  202937 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:37:17.769867  202937 kubeadm.go:319] 
	I1029 09:37:17.770180  202937 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:37:17.770266  202937 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:37:17.770272  202937 kubeadm.go:319] 
	I1029 09:37:17.770591  202937 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token m82mfn.r4a3vjvf2yvti3ja \
	I1029 09:37:17.770711  202937 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:37:17.770954  202937 kubeadm.go:319] 	--control-plane 
	I1029 09:37:17.770964  202937 kubeadm.go:319] 
	I1029 09:37:17.771297  202937 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:37:17.771306  202937 kubeadm.go:319] 
	I1029 09:37:17.771611  202937 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token m82mfn.r4a3vjvf2yvti3ja \
	I1029 09:37:17.771793  202937 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:37:17.781242  202937 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 09:37:17.781485  202937 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:37:17.781594  202937 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:37:17.781609  202937 cni.go:84] Creating CNI manager for ""
	I1029 09:37:17.781617  202937 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:17.785994  202937 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:37:17.418311  206213 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:37:17.424488  206213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:37:17.435590  206213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:17.594481  206213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:17.622444  206213 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729 for IP: 192.168.85.2
	I1029 09:37:17.622480  206213 certs.go:195] generating shared ca certs ...
	I1029 09:37:17.622497  206213 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:17.622670  206213 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:37:17.622744  206213 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:37:17.622757  206213 certs.go:257] generating profile certs ...
	I1029 09:37:17.622821  206213 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.key
	I1029 09:37:17.622839  206213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.crt with IP's: []
	I1029 09:37:18.720769  206213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.crt ...
	I1029 09:37:18.720841  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.crt: {Name:mk82287502da812433d191a21de1d93ed53a35e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:18.721064  206213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.key ...
	I1029 09:37:18.721101  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.key: {Name:mk16370c4adcecf6e7035c08190bff6574383d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:18.721237  206213 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key.f97f549a
	I1029 09:37:18.721280  206213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt.f97f549a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1029 09:37:19.084663  206213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt.f97f549a ...
	I1029 09:37:19.084698  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt.f97f549a: {Name:mkc07b3857e11c20f7d202fcc3792f7211979cf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:19.084890  206213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key.f97f549a ...
	I1029 09:37:19.084905  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key.f97f549a: {Name:mk00985576522d3e32b39e54e1a758883b3b6337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:19.084990  206213 certs.go:382] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt.f97f549a -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt
	I1029 09:37:19.085070  206213 certs.go:386] copying /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key.f97f549a -> /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key
	I1029 09:37:19.085138  206213 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key
	I1029 09:37:19.085158  206213 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.crt with IP's: []
	I1029 09:37:19.769556  206213 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.crt ...
	I1029 09:37:19.769589  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.crt: {Name:mke93538b9d7b012c188117b9664c6d047c0eaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:19.769793  206213 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key ...
	I1029 09:37:19.769811  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key: {Name:mk9fe9ee2867433bb1408eb3c2bf2a3e7bb16cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
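For reference, producing an apiserver serving certificate with the same IP SANs by hand would look roughly like this with openssl (a sketch of the equivalent, not what minikube's crypto.go actually executes; the subject and validity are illustrative):

	# key and CSR for the apiserver serving cert
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	# sign with the shared minikube CA, adding the IP SANs listed in the log above
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2")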
	I1029 09:37:19.770016  206213 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:37:19.770061  206213 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:37:19.770079  206213 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:37:19.770103  206213 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:37:19.770130  206213 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:37:19.770157  206213 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:37:19.770207  206213 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:37:19.770759  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:37:19.793288  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:37:19.820731  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:37:19.846962  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:37:19.871099  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:37:19.905226  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:37:19.927306  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:37:19.950964  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:37:19.970223  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:37:19.988364  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:37:20.018538  206213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:37:20.037658  206213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:37:20.052209  206213 ssh_runner.go:195] Run: openssl version
	I1029 09:37:20.058918  206213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:37:20.067404  206213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:20.071217  206213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:20.071320  206213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:20.118872  206213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:37:20.131473  206213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:37:20.139761  206213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:37:20.145029  206213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:37:20.145093  206213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:37:20.194935  206213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:37:20.203400  206213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:37:20.211455  206213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:37:20.215032  206213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:37:20.215098  206213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:37:20.256368  206213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
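The openssl x509 -hash calls compute the subject-name hash that OpenSSL uses to look CAs up under /etc/ssl/certs, which is where symlink names such as b5213941.0 and 3ec20f2e.0 come from. The manual equivalent of one of these link steps:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"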
	I1029 09:37:20.264681  206213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:37:20.268166  206213 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:37:20.268219  206213 kubeadm.go:401] StartCluster: {Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:20.268300  206213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:37:20.268407  206213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:37:20.299309  206213 cri.go:89] found id: ""
	I1029 09:37:20.299426  206213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:37:20.311089  206213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:37:20.319486  206213 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:37:20.319556  206213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:37:20.330470  206213 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:37:20.330499  206213 kubeadm.go:158] found existing configuration files:
	
	I1029 09:37:20.330552  206213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:37:20.340950  206213 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:37:20.341024  206213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:37:20.351885  206213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:37:20.359947  206213 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:37:20.360018  206213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:37:20.367386  206213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:37:20.376199  206213 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:37:20.376270  206213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:37:20.383730  206213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:37:20.394136  206213 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:37:20.394211  206213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
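The four grep-then-rm exchanges above are the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm init can regenerate it. Condensed into a loop, the same check reads (a sketch; the log runs each file separately):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done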
	I1029 09:37:20.406150  206213 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:37:20.453394  206213 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:37:20.453639  206213 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:37:20.486782  206213 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:37:20.486868  206213 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1029 09:37:20.486922  206213 kubeadm.go:319] OS: Linux
	I1029 09:37:20.486977  206213 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:37:20.487043  206213 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1029 09:37:20.487103  206213 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:37:20.487169  206213 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:37:20.487233  206213 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:37:20.487302  206213 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:37:20.487356  206213 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:37:20.487420  206213 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:37:20.487485  206213 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1029 09:37:20.564904  206213 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:37:20.565099  206213 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:37:20.565246  206213 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:37:20.573601  206213 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 09:37:17.788839  202937 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:37:17.801061  202937 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:37:17.801079  202937 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:37:17.834975  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:37:18.432386  202937 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:37:18.432502  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:18.432571  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-154565 minikube.k8s.io/updated_at=2025_10_29T09_37_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=default-k8s-diff-port-154565 minikube.k8s.io/primary=true
	I1029 09:37:18.802221  202937 ops.go:34] apiserver oom_adj: -16
	I1029 09:37:18.802326  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:19.303118  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:19.802690  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:20.303308  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:20.802549  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:21.303155  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:21.803076  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:20.578464  206213 out.go:252]   - Generating certificates and keys ...
	I1029 09:37:20.578589  206213 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:37:20.578671  206213 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:37:21.178209  206213 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:37:21.866601  206213 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:37:22.092025  206213 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:37:22.317452  206213 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:37:22.303141  202937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:22.457188  202937 kubeadm.go:1114] duration metric: took 4.024728003s to wait for elevateKubeSystemPrivileges
	I1029 09:37:22.457234  202937 kubeadm.go:403] duration metric: took 24.65650901s to StartCluster
	I1029 09:37:22.457251  202937 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:22.457313  202937 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:22.457977  202937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:22.458196  202937 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:37:22.458358  202937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:37:22.458626  202937 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:22.458662  202937 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:37:22.458725  202937 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-154565"
	I1029 09:37:22.458740  202937 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-154565"
	I1029 09:37:22.458761  202937 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:37:22.459394  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:37:22.459655  202937 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-154565"
	I1029 09:37:22.459689  202937 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-154565"
	I1029 09:37:22.459963  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:37:22.466194  202937 out.go:179] * Verifying Kubernetes components...
	I1029 09:37:22.469158  202937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:22.495898  202937 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:37:22.497373  202937 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-154565"
	I1029 09:37:22.497409  202937 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:37:22.497828  202937 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:37:22.500092  202937 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:22.500119  202937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:37:22.500189  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:37:22.537525  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:37:22.550195  202937 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:22.550220  202937 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:37:22.550284  202937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:37:22.578049  202937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:37:22.921145  202937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:22.934658  202937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:23.012016  202937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:37:23.012131  202937 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:24.614531  202937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.693352345s)
	I1029 09:37:24.614640  202937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.679957336s)
	I1029 09:37:24.614711  202937 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.602549302s)
	I1029 09:37:24.615440  202937 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-154565" to be "Ready" ...
	I1029 09:37:24.615744  202937 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.603704964s)
	I1029 09:37:24.615768  202937 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
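The sed pipeline that just completed rewrites the coredns ConfigMap in place: a hosts block is inserted immediately before the forward directive and a log directive before errors, so the Corefile ends up with a fragment roughly like this (trimmed; the remaining stock plugins are untouched and their exact order is assumed):

	    log
	    errors
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf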
	I1029 09:37:24.674725  202937 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:37:23.156881  206213 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:37:23.157470  206213 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-194729] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:37:23.319907  206213 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:37:23.320566  206213 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-194729] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:37:24.036741  206213 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:37:24.868623  206213 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:37:25.245956  206213 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:37:25.246201  206213 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:37:25.471646  206213 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:37:25.943164  206213 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:37:26.150943  206213 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:37:26.536159  206213 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:37:26.785442  206213 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:37:26.786238  206213 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:37:26.789181  206213 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:37:24.677694  202937 addons.go:515] duration metric: took 2.21901274s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:37:25.121863  202937 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-154565" context rescaled to 1 replicas
	W1029 09:37:26.619448  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:26.792743  206213 out.go:252]   - Booting up control plane ...
	I1029 09:37:26.792846  206213 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:37:26.792923  206213 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:37:26.792989  206213 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:37:26.807861  206213 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:37:26.808104  206213 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:37:26.815755  206213 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:37:26.816411  206213 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:37:26.816733  206213 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:37:26.952159  206213 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:37:26.952352  206213 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1029 09:37:28.619906  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:31.119155  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:28.456620  206213 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500839816s
	I1029 09:37:28.456733  206213 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:37:28.456821  206213 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:37:28.456913  206213 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:37:28.457001  206213 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:37:31.032036  206213 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.575589535s
	I1029 09:37:32.640440  206213 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.184208846s
	I1029 09:37:34.457680  206213 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001444028s
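Those are the same endpoints that can be probed by hand from inside the node if a control-plane check ever hangs; the kubelet healthz on 10248 is plain HTTP, the rest serve HTTPS with self-signed certificates (a sketch, assuming curl is available in the node image; -k skips verification):

	curl -s  http://127.0.0.1:10248/healthz     # kubelet
	curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
	curl -sk https://192.168.85.2:8443/livez    # kube-apiserver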
	I1029 09:37:34.480537  206213 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:37:34.493229  206213 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:37:34.507897  206213 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:37:34.508164  206213 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-194729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:37:34.520114  206213 kubeadm.go:319] [bootstrap-token] Using token: dc1c2i.ykxlosx02v9kctrd
	I1029 09:37:34.523054  206213 out.go:252]   - Configuring RBAC rules ...
	I1029 09:37:34.523182  206213 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:37:34.528902  206213 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:37:34.543474  206213 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:37:34.548066  206213 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:37:34.552265  206213 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:37:34.556671  206213 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:37:34.864768  206213 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:37:35.296365  206213 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:37:35.865177  206213 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:37:35.866639  206213 kubeadm.go:319] 
	I1029 09:37:35.866724  206213 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:37:35.866735  206213 kubeadm.go:319] 
	I1029 09:37:35.866817  206213 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:37:35.866828  206213 kubeadm.go:319] 
	I1029 09:37:35.866855  206213 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:37:35.866923  206213 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:37:35.866988  206213 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:37:35.867002  206213 kubeadm.go:319] 
	I1029 09:37:35.867058  206213 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:37:35.867072  206213 kubeadm.go:319] 
	I1029 09:37:35.867124  206213 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:37:35.867141  206213 kubeadm.go:319] 
	I1029 09:37:35.867196  206213 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:37:35.867291  206213 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:37:35.867369  206213 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:37:35.867379  206213 kubeadm.go:319] 
	I1029 09:37:35.867467  206213 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:37:35.867552  206213 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:37:35.867563  206213 kubeadm.go:319] 
	I1029 09:37:35.867652  206213 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dc1c2i.ykxlosx02v9kctrd \
	I1029 09:37:35.867764  206213 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:37:35.867789  206213 kubeadm.go:319] 	--control-plane 
	I1029 09:37:35.867799  206213 kubeadm.go:319] 
	I1029 09:37:35.867887  206213 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:37:35.867899  206213 kubeadm.go:319] 
	I1029 09:37:35.867985  206213 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dc1c2i.ykxlosx02v9kctrd \
	I1029 09:37:35.868095  206213 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:37:35.873147  206213 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 09:37:35.873373  206213 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:37:35.873506  206213 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:37:35.873531  206213 cni.go:84] Creating CNI manager for ""
	I1029 09:37:35.873539  206213 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:35.878630  206213 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1029 09:37:33.618318  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:35.619097  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:35.881539  206213 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:37:35.886222  206213 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:37:35.886246  206213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:37:35.900915  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
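The applied manifest is the kindnet deployment minikube recommends for the docker driver with the crio runtime (the 2601-byte cni.yaml scp'd just above). A quick way to confirm it actually rolled out, assuming the manifest creates the usual kindnet DaemonSet in kube-system, is:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet --timeout=120s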
	I1029 09:37:36.258417  206213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:37:36.258511  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:36.258575  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-194729 minikube.k8s.io/updated_at=2025_10_29T09_37_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=newest-cni-194729 minikube.k8s.io/primary=true
	I1029 09:37:36.457930  206213 ops.go:34] apiserver oom_adj: -16
	I1029 09:37:36.458043  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:36.958632  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:37.458229  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:37.958291  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:38.458134  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:38.958134  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:39.458162  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:39.958411  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:40.459090  206213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:37:40.584265  206213 kubeadm.go:1114] duration metric: took 4.325812052s to wait for elevateKubeSystemPrivileges
	I1029 09:37:40.584299  206213 kubeadm.go:403] duration metric: took 20.316082689s to StartCluster
	I1029 09:37:40.584341  206213 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:40.584406  206213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:40.585368  206213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:40.585620  206213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:37:40.585647  206213 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:37:40.585628  206213 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:37:40.585898  206213 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:40.585952  206213 addons.go:70] Setting default-storageclass=true in profile "newest-cni-194729"
	I1029 09:37:40.585969  206213 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-194729"
	I1029 09:37:40.585715  206213 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-194729"
	I1029 09:37:40.586250  206213 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-194729"
	I1029 09:37:40.586270  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:40.586324  206213 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:40.586862  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:40.590230  206213 out.go:179] * Verifying Kubernetes components...
	I1029 09:37:40.593637  206213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:40.635773  206213 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:37:40.639844  206213 addons.go:239] Setting addon default-storageclass=true in "newest-cni-194729"
	I1029 09:37:40.639889  206213 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:40.640305  206213 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:40.640496  206213 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:40.641763  206213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:37:40.641816  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:40.667685  206213 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:40.667706  206213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:37:40.667766  206213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:40.692055  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:40.719779  206213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:40.936551  206213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:37:40.936657  206213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:41.018615  206213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:41.050599  206213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:41.605714  206213 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:37:41.605830  206213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:37:41.607735  206213 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1029 09:37:41.960462  206213 api_server.go:72] duration metric: took 1.374721428s to wait for apiserver process to appear ...
	I1029 09:37:41.960498  206213 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:37:41.960529  206213 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:41.971067  206213 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:37:41.974251  206213 api_server.go:141] control plane version: v1.34.1
	I1029 09:37:41.974279  206213 api_server.go:131] duration metric: took 13.773187ms to wait for apiserver health ...
	I1029 09:37:41.974289  206213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:37:41.977499  206213 system_pods.go:59] 8 kube-system pods found
	I1029 09:37:41.977548  206213 system_pods.go:61] "coredns-66bc5c9577-xw4k2" [16536d62-45b6-4dbb-a119-2f03bc0dab76] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:37:41.977560  206213 system_pods.go:61] "etcd-newest-cni-194729" [a73be25e-1001-47bc-a73e-81d0b4a407a5] Running
	I1029 09:37:41.977566  206213 system_pods.go:61] "kindnet-4qfvm" [aaa1a0aa-75fc-418d-b140-ffa0a0dfe864] Running
	I1029 09:37:41.977571  206213 system_pods.go:61] "kube-apiserver-newest-cni-194729" [ac1d73e9-32a0-47f5-9b54-d3e7441d00c8] Running
	I1029 09:37:41.977585  206213 system_pods.go:61] "kube-controller-manager-newest-cni-194729" [c77285b8-4ae6-4f1b-8552-2c45a600d458] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:37:41.977593  206213 system_pods.go:61] "kube-proxy-grr4p" [55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad] Running
	I1029 09:37:41.977609  206213 system_pods.go:61] "kube-scheduler-newest-cni-194729" [189fc533-1dab-4e25-8187-da8f16a8a131] Running
	I1029 09:37:41.977620  206213 system_pods.go:61] "storage-provisioner" [a55079c0-4415-4c57-b3db-6c95a7876df1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:37:41.977627  206213 system_pods.go:74] duration metric: took 3.321457ms to wait for pod list to return data ...
	I1029 09:37:41.977640  206213 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:37:41.980011  206213 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1029 09:37:37.619159  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:40.118743  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:42.118963  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:41.980672  206213 default_sa.go:45] found service account: "default"
	I1029 09:37:41.980691  206213 default_sa.go:55] duration metric: took 3.045746ms for default service account to be created ...
	I1029 09:37:41.980705  206213 kubeadm.go:587] duration metric: took 1.394969076s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:37:41.980721  206213 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:37:41.982851  206213 addons.go:515] duration metric: took 1.397178744s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:37:41.989208  206213 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:37:41.989253  206213 node_conditions.go:123] node cpu capacity is 2
	I1029 09:37:41.989268  206213 node_conditions.go:105] duration metric: took 8.542075ms to run NodePressure ...
	I1029 09:37:41.989294  206213 start.go:242] waiting for startup goroutines ...
	I1029 09:37:42.114890  206213 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-194729" context rescaled to 1 replicas
	I1029 09:37:42.114981  206213 start.go:247] waiting for cluster config update ...
	I1029 09:37:42.115016  206213 start.go:256] writing updated cluster config ...
	I1029 09:37:42.115498  206213 ssh_runner.go:195] Run: rm -f paused
	I1029 09:37:42.199869  206213 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:37:42.202974  206213 out.go:179] * Done! kubectl is now configured to use "newest-cni-194729" cluster and "default" namespace by default
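	The readiness waits in the start log above (apiserver healthz, kube-system pods, default service account) can be re-run by hand against the same profile. A minimal sketch, assuming the newest-cni-194729 cluster is still up and the kubeconfig context matches the profile name; these are standard kubectl flags, not part of the test output:
	  kubectl --context newest-cni-194729 get --raw /healthz          # the healthz probe logged at 09:37:41
	  kubectl --context newest-cni-194729 -n kube-system get pods     # the "system_pods" wait
	  kubectl --context newest-cni-194729 -n default get sa default   # the default service-account wait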
	
	
	==> CRI-O <==
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.122535011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.126193448Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fb0d8b8e-6702-4153-8b2b-3534cbe40732 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.136243089Z" level=info msg="Ran pod sandbox 24c0a2649ee6c3a4558d7afac597b0282e71a479491bb544e4cf60416e4cc682 with infra container: kube-system/kindnet-4qfvm/POD" id=fb0d8b8e-6702-4153-8b2b-3534cbe40732 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.137588857Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=078906a6-698a-443a-9c08-67aa428fa5a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.13887782Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2e98c57f-bd89-4ce1-8b52-2f0b72068bac name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.146506106Z" level=info msg="Creating container: kube-system/kindnet-4qfvm/kindnet-cni" id=d54f7bd7-01b9-459e-a00e-ecc0738dd696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.146800393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.152099149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.152813288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.165210552Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-grr4p/POD" id=e2473291-b7a7-489d-9597-1ffc85ff6e15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.166567775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.170909943Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e2473291-b7a7-489d-9597-1ffc85ff6e15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.178499107Z" level=info msg="Created container c8627366d3495d0cdc5a5144a515c767577c156a900e1b95c85148cf517b1b5f: kube-system/kindnet-4qfvm/kindnet-cni" id=d54f7bd7-01b9-459e-a00e-ecc0738dd696 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.181797261Z" level=info msg="Starting container: c8627366d3495d0cdc5a5144a515c767577c156a900e1b95c85148cf517b1b5f" id=414081fb-9f4b-4dc4-8a53-68e077d99d83 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.184233164Z" level=info msg="Ran pod sandbox 26e2b963afd1d605c1c2d27a5bc13b6bcc903981c3930294397c57703c5eb761 with infra container: kube-system/kube-proxy-grr4p/POD" id=e2473291-b7a7-489d-9597-1ffc85ff6e15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.185408707Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d3fecdc9-530e-473d-9103-84acf3e820a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.191212509Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ff364491-50ef-49c4-934a-f2e0fd9ff8be name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.202010801Z" level=info msg="Started container" PID=1476 containerID=c8627366d3495d0cdc5a5144a515c767577c156a900e1b95c85148cf517b1b5f description=kube-system/kindnet-4qfvm/kindnet-cni id=414081fb-9f4b-4dc4-8a53-68e077d99d83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24c0a2649ee6c3a4558d7afac597b0282e71a479491bb544e4cf60416e4cc682
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.202505238Z" level=info msg="Creating container: kube-system/kube-proxy-grr4p/kube-proxy" id=cc9c4c5c-a780-4d6a-b092-3399c21ee2ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.20266304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.209167796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.209851831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.268694246Z" level=info msg="Created container d53319f7a8b8d0b8cfda47fe7c8445a05429413961e1bdef0e2d6b1f056fd94b: kube-system/kube-proxy-grr4p/kube-proxy" id=cc9c4c5c-a780-4d6a-b092-3399c21ee2ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.272644862Z" level=info msg="Starting container: d53319f7a8b8d0b8cfda47fe7c8445a05429413961e1bdef0e2d6b1f056fd94b" id=575bac44-4a79-4a75-aa39-9db9c730215d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:37:41 newest-cni-194729 crio[838]: time="2025-10-29T09:37:41.27955123Z" level=info msg="Started container" PID=1489 containerID=d53319f7a8b8d0b8cfda47fe7c8445a05429413961e1bdef0e2d6b1f056fd94b description=kube-system/kube-proxy-grr4p/kube-proxy id=575bac44-4a79-4a75-aa39-9db9c730215d name=/runtime.v1.RuntimeService/StartContainer sandboxID=26e2b963afd1d605c1c2d27a5bc13b6bcc903981c3930294397c57703c5eb761
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d53319f7a8b8d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   26e2b963afd1d       kube-proxy-grr4p                            kube-system
	c8627366d3495       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   24c0a2649ee6c       kindnet-4qfvm                               kube-system
	b765140c48086       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   0                   7a36f683bd356       kube-controller-manager-newest-cni-194729   kube-system
	5828bb139ab0c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            0                   6f03ae1ed1033       kube-apiserver-newest-cni-194729            kube-system
	6912bd1748a50       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      0                   eb29c177274b6       etcd-newest-cni-194729                      kube-system
	5f9508d3d9e83       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            0                   04ccb2ae831dd       kube-scheduler-newest-cni-194729            kube-system
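	The table above is the CRI-level view of the node. Assuming the profile is still running, roughly the same listing can be pulled from inside the node (a sketch for reference, not part of the captured output):
	  minikube -p newest-cni-194729 ssh -- sudo crictl ps     # running containers as seen by CRI-O
	  minikube -p newest-cni-194729 ssh -- sudo crictl pods   # pod sandboxes, matching the POD ID column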
	
	
	==> describe nodes <==
	Name:               newest-cni-194729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-194729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=newest-cni-194729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_37_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:37:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-194729
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:37:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:37:35 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:37:35 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:37:35 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Oct 2025 09:37:35 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-194729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c2631ecb-f2d1-41a4-93ae-1b71955be2b7
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-194729                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-4qfvm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-194729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-194729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-grr4p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-194729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-194729 event: Registered Node newest-cni-194729 in Controller
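	The node is NotReady only because no CNI config has been written yet (see the Ready condition above), and the resulting node.kubernetes.io/not-ready taint is what keeps coredns and storage-provisioner Pending in the pod list earlier in the log. A quick way to confirm both sides, assuming the cluster is still reachable:
	  kubectl --context newest-cni-194729 get node newest-cni-194729 -o jsonpath='{.spec.taints}'
	  kubectl --context newest-cni-194729 get pods -A --field-selector=status.phase=Pending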
	
	
	==> dmesg <==
	[ +24.018500] overlayfs: idmapped layers are currently not supported
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	[Oct29 09:37] overlayfs: idmapped layers are currently not supported
	[ +19.842209] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6912bd1748a5022c290cec13f2590e76fe2b5fda9360b6ac486a75573e635423] <==
	{"level":"warn","ts":"2025-10-29T09:37:31.424448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.440201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.458973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.479161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.496456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.526193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.535063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.550633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.566818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.582742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.598246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.613317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.630220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.643222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.658941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.677187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.692960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.709138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.723729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.739434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.759615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.788840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.809656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.832715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:31.887811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42098","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:37:43 up  1:20,  0 user,  load average: 5.01, 4.08, 3.06
	Linux newest-cni-194729 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c8627366d3495d0cdc5a5144a515c767577c156a900e1b95c85148cf517b1b5f] <==
	I1029 09:37:41.352238       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:37:41.352686       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:37:41.354939       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:37:41.354964       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:37:41.354979       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:37:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:37:41.549074       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:37:41.549093       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:37:41.549102       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:37:41.549217       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5828bb139ab0c70b2c03ab7d0a763dce258c2cc85bf4b0332241b9733194411d] <==
	I1029 09:37:32.646487       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:37:32.646522       1 policy_source.go:240] refreshing policies
	I1029 09:37:32.688630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:37:32.757563       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:32.763107       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:37:32.795505       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:37:32.795707       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:32.889487       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:37:33.445995       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:37:33.453327       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:37:33.453410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:37:34.196066       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:37:34.255516       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:37:34.388378       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:37:34.396552       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:37:34.397669       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:37:34.403207       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:37:34.584085       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:37:35.278911       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:37:35.295361       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:37:35.314695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:37:40.389947       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:40.395011       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:40.451639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:37:40.675966       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b765140c48086fb9cdef1390ccb8feaea124d37deaacfd94588fbc0242746afa] <==
	I1029 09:37:39.632173       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:37:39.632197       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:37:39.632205       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:37:39.632281       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:37:39.632878       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:37:39.632285       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:37:39.633174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:37:39.633257       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:37:39.633288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:37:39.632941       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:37:39.632929       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:37:39.633564       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:37:39.634762       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:37:39.634793       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:37:39.634830       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:37:39.636744       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:37:39.638211       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:37:39.638693       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:37:39.641213       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:37:39.661082       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:37:39.661145       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:37:39.661167       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:37:39.661174       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:37:39.661179       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:37:39.670608       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-194729" podCIDRs=["10.42.0.0/24"]
	
	
	==> kube-proxy [d53319f7a8b8d0b8cfda47fe7c8445a05429413961e1bdef0e2d6b1f056fd94b] <==
	I1029 09:37:41.374263       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:37:41.557953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:37:41.658065       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:37:41.658192       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:37:41.658344       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:37:41.706485       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:37:41.709626       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:37:41.714560       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:37:41.714937       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:37:41.715115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:37:41.716542       1 config.go:200] "Starting service config controller"
	I1029 09:37:41.716602       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:37:41.716672       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:37:41.716815       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:37:41.716874       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:37:41.716908       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:37:41.717531       1 config.go:309] "Starting node config controller"
	I1029 09:37:41.783278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:37:41.867888       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:37:41.917392       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:37:41.917529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:37:41.917551       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5f9508d3d9e8381a251f44b8b0d7bbb719f2a400105bdd2fa0dc87f225ee602a] <==
	E1029 09:37:32.641166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:37:32.645543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:37:32.645616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:37:32.645674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:37:32.645725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:37:32.645951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:37:32.646013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:37:32.646061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:37:32.646171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:37:33.471192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:37:33.480265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:37:33.480486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:37:33.512238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:37:33.534402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:37:33.582175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:37:33.604580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:37:33.648830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1029 09:37:33.671142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:37:33.733224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:37:33.774224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:37:33.804958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:37:33.806309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:37:33.893274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:37:33.923898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1029 09:37:36.828210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
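	The "Failed to watch ... forbidden" errors above are the usual startup race: the scheduler's informers begin listing resources before its RBAC bindings have propagated, and they stop once the caches sync (the final line). Assuming the cluster is still up, the effective permissions can be checked afterwards with kubectl's impersonation flag:
	  kubectl --context newest-cni-194729 auth can-i list nodes --as=system:kube-scheduler
	  kubectl --context newest-cni-194729 auth can-i watch pods --as=system:kube-scheduler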
	
	
	==> kubelet <==
	Oct 29 09:37:35 newest-cni-194729 kubelet[1297]: I1029 09:37:35.516381    1297 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-194729"
	Oct 29 09:37:35 newest-cni-194729 kubelet[1297]: I1029 09:37:35.533352    1297 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-194729"
	Oct 29 09:37:35 newest-cni-194729 kubelet[1297]: I1029 09:37:35.533456    1297 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-194729"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.219128    1297 apiserver.go:52] "Watching apiserver"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.258352    1297 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.398316    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-194729"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: E1029 09:37:36.427125    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-194729\" already exists" pod="kube-system/kube-scheduler-newest-cni-194729"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.506826    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-194729" podStartSLOduration=1.506807454 podStartE2EDuration="1.506807454s" podCreationTimestamp="2025-10-29 09:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:36.463988026 +0000 UTC m=+1.330428399" watchObservedRunningTime="2025-10-29 09:37:36.506807454 +0000 UTC m=+1.373247811"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.542084    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-194729" podStartSLOduration=1.542062834 podStartE2EDuration="1.542062834s" podCreationTimestamp="2025-10-29 09:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:36.507125668 +0000 UTC m=+1.373566025" watchObservedRunningTime="2025-10-29 09:37:36.542062834 +0000 UTC m=+1.408503216"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.562559    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-194729" podStartSLOduration=1.562539662 podStartE2EDuration="1.562539662s" podCreationTimestamp="2025-10-29 09:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:36.542337545 +0000 UTC m=+1.408777918" watchObservedRunningTime="2025-10-29 09:37:36.562539662 +0000 UTC m=+1.428980027"
	Oct 29 09:37:36 newest-cni-194729 kubelet[1297]: I1029 09:37:36.581837    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-194729" podStartSLOduration=1.5818177260000001 podStartE2EDuration="1.581817726s" podCreationTimestamp="2025-10-29 09:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:36.562896285 +0000 UTC m=+1.429336650" watchObservedRunningTime="2025-10-29 09:37:36.581817726 +0000 UTC m=+1.448258083"
	Oct 29 09:37:39 newest-cni-194729 kubelet[1297]: I1029 09:37:39.754896    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 29 09:37:39 newest-cni-194729 kubelet[1297]: I1029 09:37:39.755965    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909610    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-xtables-lock\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909655    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqfm\" (UniqueName: \"kubernetes.io/projected/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-kube-api-access-4gqfm\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909695    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-xtables-lock\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909716    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xzd\" (UniqueName: \"kubernetes.io/projected/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-kube-api-access-99xzd\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909737    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-kube-proxy\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909765    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-lib-modules\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909787    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-cni-cfg\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:40 newest-cni-194729 kubelet[1297]: I1029 09:37:40.909805    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-lib-modules\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:41 newest-cni-194729 kubelet[1297]: I1029 09:37:41.075064    1297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 09:37:41 newest-cni-194729 kubelet[1297]: W1029 09:37:41.136582    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/crio-24c0a2649ee6c3a4558d7afac597b0282e71a479491bb544e4cf60416e4cc682 WatchSource:0}: Error finding container 24c0a2649ee6c3a4558d7afac597b0282e71a479491bb544e4cf60416e4cc682: Status 404 returned error can't find the container with id 24c0a2649ee6c3a4558d7afac597b0282e71a479491bb544e4cf60416e4cc682
	Oct 29 09:37:41 newest-cni-194729 kubelet[1297]: W1029 09:37:41.174836    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/crio-26e2b963afd1d605c1c2d27a5bc13b6bcc903981c3930294397c57703c5eb761 WatchSource:0}: Error finding container 26e2b963afd1d605c1c2d27a5bc13b6bcc903981c3930294397c57703c5eb761: Status 404 returned error can't find the container with id 26e2b963afd1d605c1c2d27a5bc13b6bcc903981c3930294397c57703c5eb761
	Oct 29 09:37:41 newest-cni-194729 kubelet[1297]: I1029 09:37:41.499195    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grr4p" podStartSLOduration=1.499166894 podStartE2EDuration="1.499166894s" podCreationTimestamp="2025-10-29 09:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:41.446802872 +0000 UTC m=+6.313243237" watchObservedRunningTime="2025-10-29 09:37:41.499166894 +0000 UTC m=+6.365607267"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194729 -n newest-cni-194729
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-194729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xw4k2 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner: exit status 1 (86.110935ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xw4k2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)
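A note on the NotFound errors above: the post-mortem helper describes the non-running pods without a namespace flag, so pods that actually live in kube-system (coredns, storage-provisioner) are looked up in the default namespace and come back as NotFound; they may also have been recreated under new names after the restart. A minimal sketch of a namespace-aware variant, assuming the same kubeconfig context the harness uses:

	# Hypothetical follow-up: pair each non-running pod with its namespace before describing it.
	kubectl --context newest-cni-194729 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	| while read ns pod; do
	    kubectl --context newest-cni-194729 -n "$ns" describe pod "$pod"
	  done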

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-194729 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-194729 --alsologtostderr -v=1: exit status 80 (1.912913426s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-194729 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:38:01.771888  211596 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:38:01.772033  211596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:01.772059  211596 out.go:374] Setting ErrFile to fd 2...
	I1029 09:38:01.772082  211596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:01.772443  211596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:38:01.772746  211596 out.go:368] Setting JSON to false
	I1029 09:38:01.772773  211596 mustload.go:66] Loading cluster: newest-cni-194729
	I1029 09:38:01.773231  211596 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:01.773818  211596 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:38:01.795655  211596 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:38:01.795974  211596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:01.869324  211596 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:38:01.852738106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:01.869999  211596 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-194729 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:38:01.873553  211596 out.go:179] * Pausing node newest-cni-194729 ... 
	I1029 09:38:01.876549  211596 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:38:01.876896  211596 ssh_runner.go:195] Run: systemctl --version
	I1029 09:38:01.876946  211596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:38:01.897598  211596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:38:02.004900  211596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:38:02.027386  211596 pause.go:52] kubelet running: true
	I1029 09:38:02.027500  211596 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:38:02.295479  211596 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:38:02.295592  211596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:38:02.377917  211596 cri.go:89] found id: "67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6"
	I1029 09:38:02.377943  211596 cri.go:89] found id: "78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf"
	I1029 09:38:02.377949  211596 cri.go:89] found id: "14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2"
	I1029 09:38:02.377953  211596 cri.go:89] found id: "b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268"
	I1029 09:38:02.377956  211596 cri.go:89] found id: "2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097"
	I1029 09:38:02.377966  211596 cri.go:89] found id: "3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7"
	I1029 09:38:02.377970  211596 cri.go:89] found id: ""
	I1029 09:38:02.378022  211596 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:38:02.389334  211596 retry.go:31] will retry after 199.387431ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:02Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:38:02.589584  211596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:38:02.602964  211596 pause.go:52] kubelet running: false
	I1029 09:38:02.603055  211596 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:38:02.780188  211596 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:38:02.780393  211596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:38:02.887323  211596 cri.go:89] found id: "67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6"
	I1029 09:38:02.887392  211596 cri.go:89] found id: "78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf"
	I1029 09:38:02.887413  211596 cri.go:89] found id: "14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2"
	I1029 09:38:02.887437  211596 cri.go:89] found id: "b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268"
	I1029 09:38:02.887467  211596 cri.go:89] found id: "2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097"
	I1029 09:38:02.887494  211596 cri.go:89] found id: "3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7"
	I1029 09:38:02.887523  211596 cri.go:89] found id: ""
	I1029 09:38:02.887600  211596 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:38:02.898859  211596 retry.go:31] will retry after 452.471922ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:02Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:38:03.351534  211596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:38:03.365375  211596 pause.go:52] kubelet running: false
	I1029 09:38:03.365483  211596 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:38:03.517534  211596 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:38:03.517627  211596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:38:03.595712  211596 cri.go:89] found id: "67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6"
	I1029 09:38:03.595736  211596 cri.go:89] found id: "78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf"
	I1029 09:38:03.595742  211596 cri.go:89] found id: "14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2"
	I1029 09:38:03.595746  211596 cri.go:89] found id: "b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268"
	I1029 09:38:03.595750  211596 cri.go:89] found id: "2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097"
	I1029 09:38:03.595754  211596 cri.go:89] found id: "3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7"
	I1029 09:38:03.595758  211596 cri.go:89] found id: ""
	I1029 09:38:03.595805  211596 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:38:03.610360  211596 out.go:203] 
	W1029 09:38:03.613178  211596 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:38:03.613197  211596 out.go:285] * 
	* 
	W1029 09:38:03.619067  211596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:38:03.622146  211596 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-194729 --alsologtostderr -v=1 failed: exit status 80
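The failure mode here matches the other Pause failures in this report: after disabling the kubelet, minikube enumerates running containers with `sudo runc list -f json`, and every retry fails because runc's default state directory /run/runc does not exist on this crio node (crio can point runc at a different runtime_root, so the absence of the default path does not by itself mean nothing is running). A minimal diagnostic sketch, assuming the newest-cni-194729 profile is still up, that reproduces the failing call and inspects the state-directory layout:

	# Re-run the exact command minikube retried above, then look at runc's default state dir.
	minikube -p newest-cni-194729 ssh -- 'sudo runc list -f json; ls -ld /run/runc'
	# Where crio keeps its runc state is configurable; grep its effective config for runtime_root.
	minikube -p newest-cni-194729 ssh -- 'sudo crio config 2>/dev/null | grep -n runtime_root'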
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-194729
helpers_test.go:243: (dbg) docker inspect newest-cni-194729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5",
	        "Created": "2025-10-29T09:37:08.716458695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209960,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:37:46.496155055Z",
	            "FinishedAt": "2025-10-29T09:37:45.595637086Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/hosts",
	        "LogPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5-json.log",
	        "Name": "/newest-cni-194729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-194729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-194729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5",
	                "LowerDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-194729",
	                "Source": "/var/lib/docker/volumes/newest-cni-194729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-194729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-194729",
	                "name.minikube.sigs.k8s.io": "newest-cni-194729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b19db0a4f40deb1c725bef287ec99efe503708353e2d66b65c76d7502c149882",
	            "SandboxKey": "/var/run/docker/netns/b19db0a4f40d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-194729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:14:c6:60:a0:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f4995a956f2110ae36f130adfccc0f659ca020749dec44d4be9fd100beca009",
	                    "EndpointID": "a7875eff1266fff42307385e7ae4a1cde3dc4e8360f0a8ac510a1a3e483207e1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-194729",
	                        "e7978179791b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
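For reference, the 22/tcp mapping shown above (127.0.0.1:33083) is the same value the pause command resolved earlier with a Go template over `docker container inspect` (see the cli_runner lines in the stderr block). A small sketch using that same template to read the node's SSH endpoint directly:

	# Prints the host port bound to the container's SSH port 22 (33083 in this run).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-194729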
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729: exit status 2 (354.162985ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-194729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-194729 logs -n 25: (1.111564124s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:37 UTC │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-194729 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-194729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ newest-cni-194729 image list --format=json                                                                                                                                                                                                    │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ pause   │ -p newest-cni-194729 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:37:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:37:46.236271  209832 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:37:46.236508  209832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:37:46.236518  209832 out.go:374] Setting ErrFile to fd 2...
	I1029 09:37:46.236523  209832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:37:46.236791  209832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:37:46.237176  209832 out.go:368] Setting JSON to false
	I1029 09:37:46.238116  209832 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4818,"bootTime":1761725848,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:37:46.238189  209832 start.go:143] virtualization:  
	I1029 09:37:46.241209  209832 out.go:179] * [newest-cni-194729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:37:46.245005  209832 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:37:46.245177  209832 notify.go:221] Checking for updates...
	I1029 09:37:46.250685  209832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:37:46.253381  209832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:46.256197  209832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:37:46.259045  209832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:37:46.261912  209832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:37:46.265287  209832 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:46.265856  209832 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:37:46.289818  209832 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:37:46.289939  209832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:37:46.345735  209832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:37:46.335965268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:37:46.346114  209832 docker.go:319] overlay module found
	I1029 09:37:46.349330  209832 out.go:179] * Using the docker driver based on existing profile
	I1029 09:37:46.351394  209832 start.go:309] selected driver: docker
	I1029 09:37:46.351410  209832 start.go:930] validating driver "docker" against &{Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:46.351561  209832 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:37:46.354073  209832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:37:46.408450  209832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:37:46.398427712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:37:46.408814  209832 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:37:46.408853  209832 cni.go:84] Creating CNI manager for ""
	I1029 09:37:46.408914  209832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:46.408952  209832 start.go:353] cluster config:
	{Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:46.413981  209832 out.go:179] * Starting "newest-cni-194729" primary control-plane node in "newest-cni-194729" cluster
	I1029 09:37:46.417076  209832 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:37:46.420002  209832 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:37:46.422706  209832 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:46.422759  209832 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:37:46.422772  209832 cache.go:59] Caching tarball of preloaded images
	I1029 09:37:46.422811  209832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:37:46.422857  209832 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:37:46.422868  209832 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:37:46.422986  209832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json ...
	I1029 09:37:46.442318  209832 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:37:46.442344  209832 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:37:46.442356  209832 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:37:46.442379  209832 start.go:360] acquireMachinesLock for newest-cni-194729: {Name:mkd3ffc0a88229da12feec44aaf76435e580410c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:37:46.442441  209832 start.go:364] duration metric: took 35.462µs to acquireMachinesLock for "newest-cni-194729"
	I1029 09:37:46.442464  209832 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:37:46.442470  209832 fix.go:54] fixHost starting: 
	I1029 09:37:46.442740  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:46.460186  209832 fix.go:112] recreateIfNeeded on newest-cni-194729: state=Stopped err=<nil>
	W1029 09:37:46.460216  209832 fix.go:138] unexpected machine state, will restart: <nil>
	W1029 09:37:44.119549  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:46.618579  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:46.463583  209832 out.go:252] * Restarting existing docker container for "newest-cni-194729" ...
	I1029 09:37:46.463671  209832 cli_runner.go:164] Run: docker start newest-cni-194729
	I1029 09:37:46.720665  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:46.741747  209832 kic.go:430] container "newest-cni-194729" state is running.
	I1029 09:37:46.743379  209832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:46.766284  209832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json ...
	I1029 09:37:46.766522  209832 machine.go:94] provisionDockerMachine start ...
	I1029 09:37:46.766630  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:46.788703  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:46.789529  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:46.789550  209832 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:37:46.790783  209832 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:37:49.940048  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-194729
	
	I1029 09:37:49.940072  209832 ubuntu.go:182] provisioning hostname "newest-cni-194729"
	I1029 09:37:49.940142  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:49.958547  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:49.958844  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:49.958858  209832 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-194729 && echo "newest-cni-194729" | sudo tee /etc/hostname
	I1029 09:37:50.125915  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-194729
	
	I1029 09:37:50.125991  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.143348  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:50.143716  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:50.143742  209832 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-194729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-194729/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-194729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:37:50.296645  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:37:50.296673  209832 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:37:50.296695  209832 ubuntu.go:190] setting up certificates
	I1029 09:37:50.296712  209832 provision.go:84] configureAuth start
	I1029 09:37:50.296776  209832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:50.315356  209832 provision.go:143] copyHostCerts
	I1029 09:37:50.315432  209832 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:37:50.315452  209832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:37:50.315529  209832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:37:50.315637  209832 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:37:50.315648  209832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:37:50.315677  209832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:37:50.315745  209832 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:37:50.315754  209832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:37:50.315780  209832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:37:50.315843  209832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.newest-cni-194729 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-194729]
	I1029 09:37:50.449730  209832 provision.go:177] copyRemoteCerts
	I1029 09:37:50.449795  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:37:50.449833  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.471811  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:50.576205  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:37:50.593637  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:37:50.613347  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:37:50.633127  209832 provision.go:87] duration metric: took 336.388525ms to configureAuth
	I1029 09:37:50.633154  209832 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:37:50.633358  209832 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:50.633464  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.654940  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:50.655265  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:50.655285  209832 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:37:50.956582  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:37:50.956607  209832 machine.go:97] duration metric: took 4.190076075s to provisionDockerMachine
	I1029 09:37:50.956618  209832 start.go:293] postStartSetup for "newest-cni-194729" (driver="docker")
	I1029 09:37:50.956630  209832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:37:50.956704  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:37:50.956768  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.974822  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:51.084528  209832 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:37:51.087877  209832 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:37:51.087904  209832 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:37:51.087915  209832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:37:51.087968  209832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:37:51.088052  209832 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:37:51.088168  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:37:51.095685  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:37:51.114042  209832 start.go:296] duration metric: took 157.407585ms for postStartSetup
	I1029 09:37:51.114118  209832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:37:51.114169  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:51.133989  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	W1029 09:37:49.118820  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:51.121757  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:51.237457  209832 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:37:51.242441  209832 fix.go:56] duration metric: took 4.799963975s for fixHost
	I1029 09:37:51.242465  209832 start.go:83] releasing machines lock for "newest-cni-194729", held for 4.800011072s
	I1029 09:37:51.242558  209832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:51.260407  209832 ssh_runner.go:195] Run: cat /version.json
	I1029 09:37:51.260464  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:51.260720  209832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:37:51.260813  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:51.280046  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:51.288283  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:51.472583  209832 ssh_runner.go:195] Run: systemctl --version
	I1029 09:37:51.478953  209832 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:37:51.517013  209832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:37:51.522072  209832 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:37:51.522141  209832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:37:51.529971  209832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:37:51.529995  209832 start.go:496] detecting cgroup driver to use...
	I1029 09:37:51.530045  209832 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:37:51.530098  209832 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:37:51.545795  209832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:37:51.559767  209832 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:37:51.559878  209832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:37:51.575920  209832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:37:51.589465  209832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:37:51.718490  209832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:37:51.842425  209832 docker.go:234] disabling docker service ...
	I1029 09:37:51.842535  209832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:37:51.859778  209832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:37:51.874643  209832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:37:52.000985  209832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:37:52.127826  209832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:37:52.141437  209832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:37:52.158601  209832 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:37:52.158710  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.168210  209832 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:37:52.168375  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.178235  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.189634  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.198861  209832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:37:52.207094  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.218395  209832 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.226683  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.235280  209832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:37:52.243965  209832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:37:52.251388  209832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:52.375177  209832 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:37:52.502409  209832 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:37:52.502560  209832 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:37:52.507028  209832 start.go:564] Will wait 60s for crictl version
	I1029 09:37:52.507138  209832 ssh_runner.go:195] Run: which crictl
	I1029 09:37:52.513778  209832 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:37:52.541733  209832 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:37:52.541911  209832 ssh_runner.go:195] Run: crio --version
	I1029 09:37:52.573881  209832 ssh_runner.go:195] Run: crio --version
	I1029 09:37:52.614136  209832 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:37:52.617048  209832 cli_runner.go:164] Run: docker network inspect newest-cni-194729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:37:52.633781  209832 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:37:52.637601  209832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:37:52.650217  209832 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:37:52.652987  209832 kubeadm.go:884] updating cluster {Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:37:52.653130  209832 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:52.653221  209832 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:37:52.689629  209832 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:37:52.689652  209832 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:37:52.689718  209832 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:37:52.720898  209832 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:37:52.720918  209832 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:37:52.720926  209832 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:37:52.721017  209832 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-194729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:37:52.721092  209832 ssh_runner.go:195] Run: crio config
	I1029 09:37:52.775574  209832 cni.go:84] Creating CNI manager for ""
	I1029 09:37:52.775641  209832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:52.775678  209832 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:37:52.775748  209832 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-194729 NodeName:newest-cni-194729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:37:52.775912  209832 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-194729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:37:52.776028  209832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:37:52.785327  209832 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:37:52.785403  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:37:52.793533  209832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:37:52.806927  209832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:37:52.821164  209832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1029 09:37:52.835093  209832 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:37:52.838849  209832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:37:52.849601  209832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:52.972402  209832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:52.989865  209832 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729 for IP: 192.168.85.2
	I1029 09:37:52.989889  209832 certs.go:195] generating shared ca certs ...
	I1029 09:37:52.989906  209832 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:52.990032  209832 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:37:52.990077  209832 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:37:52.990088  209832 certs.go:257] generating profile certs ...
	I1029 09:37:52.990166  209832 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.key
	I1029 09:37:52.990244  209832 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key.f97f549a
	I1029 09:37:52.990292  209832 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key
	I1029 09:37:52.990401  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:37:52.990445  209832 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:37:52.990459  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:37:52.990486  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:37:52.990511  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:37:52.990535  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:37:52.990585  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:37:52.991152  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:37:53.019704  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:37:53.043086  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:37:53.065990  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:37:53.089826  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:37:53.113439  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:37:53.135950  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:37:53.163790  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:37:53.188211  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:37:53.211390  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:37:53.234831  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:37:53.253936  209832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:37:53.270571  209832 ssh_runner.go:195] Run: openssl version
	I1029 09:37:53.277026  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:37:53.288543  209832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:53.292163  209832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:53.292267  209832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:53.339307  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:37:53.352376  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:37:53.362884  209832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:37:53.367195  209832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:37:53.367270  209832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:37:53.409375  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:37:53.417386  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:37:53.425810  209832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:37:53.429463  209832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:37:53.429581  209832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:37:53.475895  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:37:53.483702  209832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:37:53.488410  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:37:53.529515  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:37:53.578455  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:37:53.622347  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:37:53.691876  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:37:53.742113  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 09:37:53.854449  209832 kubeadm.go:401] StartCluster: {Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:53.854550  209832 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:37:53.854643  209832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:37:53.913359  209832 cri.go:89] found id: "14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2"
	I1029 09:37:53.913395  209832 cri.go:89] found id: "b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268"
	I1029 09:37:53.913401  209832 cri.go:89] found id: "2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097"
	I1029 09:37:53.913406  209832 cri.go:89] found id: "3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7"
	I1029 09:37:53.913409  209832 cri.go:89] found id: ""
	I1029 09:37:53.913467  209832 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:37:53.931406  209832 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:37:53Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:37:53.931515  209832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:37:53.946080  209832 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:37:53.946153  209832 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:37:53.946234  209832 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:37:53.959543  209832 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:37:53.960205  209832 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-194729" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:53.960519  209832 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-194729" cluster setting kubeconfig missing "newest-cni-194729" context setting]
	I1029 09:37:53.960973  209832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:53.962573  209832 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:37:53.993252  209832 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:37:53.993329  209832 kubeadm.go:602] duration metric: took 47.155238ms to restartPrimaryControlPlane
	I1029 09:37:53.993355  209832 kubeadm.go:403] duration metric: took 138.917319ms to StartCluster
	I1029 09:37:53.993402  209832 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:53.993484  209832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:53.994451  209832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:53.994711  209832 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:37:53.995095  209832 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:37:53.995163  209832 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-194729"
	I1029 09:37:53.995181  209832 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-194729"
	W1029 09:37:53.995187  209832 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:37:53.995208  209832 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:53.995663  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:53.996052  209832 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:53.996131  209832 addons.go:70] Setting dashboard=true in profile "newest-cni-194729"
	I1029 09:37:53.996168  209832 addons.go:239] Setting addon dashboard=true in "newest-cni-194729"
	W1029 09:37:53.996193  209832 addons.go:248] addon dashboard should already be in state true
	I1029 09:37:53.996233  209832 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:53.996684  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:53.998784  209832 addons.go:70] Setting default-storageclass=true in profile "newest-cni-194729"
	I1029 09:37:53.998841  209832 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-194729"
	I1029 09:37:53.999157  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:53.999215  209832 out.go:179] * Verifying Kubernetes components...
	I1029 09:37:54.011735  209832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:54.050279  209832 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:37:54.054200  209832 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:37:54.057502  209832 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:37:54.057808  209832 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:54.057824  209832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:37:54.057889  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:54.062144  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:37:54.062170  209832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:37:54.062237  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:54.066826  209832 addons.go:239] Setting addon default-storageclass=true in "newest-cni-194729"
	W1029 09:37:54.066847  209832 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:37:54.066872  209832 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:54.067315  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:54.129737  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:54.139439  209832 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:54.139459  209832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:37:54.139519  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:54.142634  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:54.170978  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:54.383048  209832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:54.397765  209832 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:37:54.397902  209832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:37:54.416908  209832 api_server.go:72] duration metric: took 422.127604ms to wait for apiserver process to appear ...
	I1029 09:37:54.416980  209832 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:37:54.417012  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:54.428793  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:37:54.428814  209832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:37:54.447704  209832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:54.464150  209832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:54.517321  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:37:54.517341  209832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:37:54.585479  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:37:54.585508  209832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:37:54.636632  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:37:54.636663  209832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:37:54.671039  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:37:54.671078  209832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:37:54.691862  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:37:54.691898  209832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:37:54.717320  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:37:54.717345  209832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:37:54.742088  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:37:54.742123  209832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:37:54.766963  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:37:54.766989  209832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:37:54.791694  209832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1029 09:37:53.619629  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:56.118884  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:58.554006  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:37:58.554036  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:37:58.554051  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.070764  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.070803  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:37:59.070818  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.101328  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.101358  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:37:59.417822  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.434438  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.434480  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:37:59.917826  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.937364  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.937392  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:38:00.417941  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:38:00.453237  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:38:00.453265  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:38:00.545865  209832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.098130712s)
	I1029 09:38:00.545938  209832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.08176917s)
	I1029 09:38:00.546332  209832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.754586379s)
	I1029 09:38:00.549637  209832 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-194729 addons enable metrics-server
	
	I1029 09:38:00.572623  209832 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1029 09:38:00.575643  209832 addons.go:515] duration metric: took 6.580531736s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:38:00.917266  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:38:00.928941  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:38:00.930425  209832 api_server.go:141] control plane version: v1.34.1
	I1029 09:38:00.930516  209832 api_server.go:131] duration metric: took 6.513506521s to wait for apiserver health ...
	I1029 09:38:00.930543  209832 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:38:00.934799  209832 system_pods.go:59] 8 kube-system pods found
	I1029 09:38:00.934834  209832 system_pods.go:61] "coredns-66bc5c9577-xw4k2" [16536d62-45b6-4dbb-a119-2f03bc0dab76] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:38:00.934844  209832 system_pods.go:61] "etcd-newest-cni-194729" [a73be25e-1001-47bc-a73e-81d0b4a407a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:38:00.934849  209832 system_pods.go:61] "kindnet-4qfvm" [aaa1a0aa-75fc-418d-b140-ffa0a0dfe864] Running
	I1029 09:38:00.934856  209832 system_pods.go:61] "kube-apiserver-newest-cni-194729" [ac1d73e9-32a0-47f5-9b54-d3e7441d00c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:38:00.934862  209832 system_pods.go:61] "kube-controller-manager-newest-cni-194729" [c77285b8-4ae6-4f1b-8552-2c45a600d458] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:38:00.934868  209832 system_pods.go:61] "kube-proxy-grr4p" [55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad] Running
	I1029 09:38:00.934880  209832 system_pods.go:61] "kube-scheduler-newest-cni-194729" [189fc533-1dab-4e25-8187-da8f16a8a131] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:38:00.934885  209832 system_pods.go:61] "storage-provisioner" [a55079c0-4415-4c57-b3db-6c95a7876df1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:38:00.934892  209832 system_pods.go:74] duration metric: took 4.328835ms to wait for pod list to return data ...
	I1029 09:38:00.934906  209832 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:38:00.937100  209832 default_sa.go:45] found service account: "default"
	I1029 09:38:00.937136  209832 default_sa.go:55] duration metric: took 2.223708ms for default service account to be created ...
	I1029 09:38:00.937149  209832 kubeadm.go:587] duration metric: took 6.942389123s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:38:00.937165  209832 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:38:00.939444  209832 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:38:00.939479  209832 node_conditions.go:123] node cpu capacity is 2
	I1029 09:38:00.939493  209832 node_conditions.go:105] duration metric: took 2.322951ms to run NodePressure ...
	I1029 09:38:00.939513  209832 start.go:242] waiting for startup goroutines ...
	I1029 09:38:00.939526  209832 start.go:247] waiting for cluster config update ...
	I1029 09:38:00.939539  209832 start.go:256] writing updated cluster config ...
	I1029 09:38:00.939872  209832 ssh_runner.go:195] Run: rm -f paused
	I1029 09:38:01.011956  209832 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:38:01.017133  209832 out.go:179] * Done! kubectl is now configured to use "newest-cni-194729" cluster and "default" namespace by default
	W1029 09:37:58.619040  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:38:00.619835  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.411522224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.416090741Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-grr4p/POD" id=fe069ffd-fb48-417d-969e-3c64c7084299 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.416158516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.436778287Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e8316903-7939-4898-870d-203fe1bc64ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.444633882Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fe069ffd-fb48-417d-969e-3c64c7084299 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.470124056Z" level=info msg="Ran pod sandbox 41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319 with infra container: kube-system/kindnet-4qfvm/POD" id=e8316903-7939-4898-870d-203fe1bc64ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.473591508Z" level=info msg="Ran pod sandbox 3c2bf2fe4e94ede75df53cdf1c53738cbc012bd47bc6887c9cdc33e6ca8dba46 with infra container: kube-system/kube-proxy-grr4p/POD" id=fe069ffd-fb48-417d-969e-3c64c7084299 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.488805578Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e8fe9219-f2b9-4e90-a5aa-7fb667905eac name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.48911804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=24764a62-22b8-4d4c-9a9e-3bfa08c50389 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.490603452Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ba88c1a4-253f-4e4e-b92c-5213d7180f73 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.491890069Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7d91bec2-d6fe-4e50-a99c-80af79f639ae name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.493891999Z" level=info msg="Creating container: kube-system/kube-proxy-grr4p/kube-proxy" id=3a958890-989e-4dad-b3b0-fa0aeb7cb594 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.493990584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.498858156Z" level=info msg="Creating container: kube-system/kindnet-4qfvm/kindnet-cni" id=61c93bdd-2639-4072-a345-79cf6d7048c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.499145215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.522744271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.523263668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.524183792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.52544663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.564048954Z" level=info msg="Created container 78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf: kube-system/kindnet-4qfvm/kindnet-cni" id=61c93bdd-2639-4072-a345-79cf6d7048c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.568547514Z" level=info msg="Starting container: 78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf" id=502143b3-a7d2-4c0c-bb80-bfba2042e94e name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.576803367Z" level=info msg="Started container" PID=1064 containerID=78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf description=kube-system/kindnet-4qfvm/kindnet-cni id=502143b3-a7d2-4c0c-bb80-bfba2042e94e name=/runtime.v1.RuntimeService/StartContainer sandboxID=41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.670508604Z" level=info msg="Created container 67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6: kube-system/kube-proxy-grr4p/kube-proxy" id=3a958890-989e-4dad-b3b0-fa0aeb7cb594 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.678015248Z" level=info msg="Starting container: 67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6" id=bfeac4b2-550d-40cd-81ff-053463397cec name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.68959415Z" level=info msg="Started container" PID=1067 containerID=67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6 description=kube-system/kube-proxy-grr4p/kube-proxy id=bfeac4b2-550d-40cd-81ff-053463397cec name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2bf2fe4e94ede75df53cdf1c53738cbc012bd47bc6887c9cdc33e6ca8dba46
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	67ac493578abd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   3c2bf2fe4e94e       kube-proxy-grr4p                            kube-system
	78ea18453b16f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   41e71ebfcaddf       kindnet-4qfvm                               kube-system
	14415e882c21a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   8a6f69794b305       etcd-newest-cni-194729                      kube-system
	b4fa523dc72d0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   7eaad0cc42a09       kube-scheduler-newest-cni-194729            kube-system
	2bc825d65f39d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   f09534d9c2fca       kube-controller-manager-newest-cni-194729   kube-system
	3f699bcbf2930       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   e44cf3d897afa       kube-apiserver-newest-cni-194729            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-194729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-194729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=newest-cni-194729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_37_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:37:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-194729
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:37:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-194729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c2631ecb-f2d1-41a4-93ae-1b71955be2b7
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-194729                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         29s
	  kube-system                 kindnet-4qfvm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-newest-cni-194729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-194729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-grr4p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-newest-cni-194729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29s                kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 29s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  29s                kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     29s                kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   Starting                 29s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           25s                node-controller  Node newest-cni-194729 event: Registered Node newest-cni-194729 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-194729 event: Registered Node newest-cni-194729 in Controller
	
	
	==> dmesg <==
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	[Oct29 09:37] overlayfs: idmapped layers are currently not supported
	[ +19.842209] overlayfs: idmapped layers are currently not supported
	[ +25.062735] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2] <==
	{"level":"warn","ts":"2025-10-29T09:37:57.462539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.480548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.497283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.516613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.532479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.545905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.564960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.584801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.605222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.633214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.657567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.671450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.680601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.701228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.713617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.730854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.747854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.762828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.777682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.794775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.829963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.850394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.864260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.878327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.934572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:38:04 up  1:20,  0 user,  load average: 3.94, 3.89, 3.02
	Linux newest-cni-194729 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf] <==
	I1029 09:37:59.754668       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:37:59.755008       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:37:59.755105       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:37:59.755117       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:37:59.755130       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:37:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:37:59.952661       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:37:59.952679       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:37:59.952777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:37:59.954508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7] <==
	I1029 09:37:58.988846       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:37:58.842750       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:37:58.989044       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:37:58.841321       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:37:58.842670       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:37:59.041783       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:37:59.041892       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:37:59.041903       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:37:59.041910       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:37:59.041916       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:37:59.046708       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1029 09:37:59.090457       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:37:59.186488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:37:59.526800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:37:59.815578       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:37:59.908237       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:37:59.980570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:38:00.017660       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:38:00.363118       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.14.58"}
	I1029 09:38:00.428457       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.75.113"}
	I1029 09:38:02.518049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:38:02.567128       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:38:02.626336       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:38:02.672193       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097] <==
	I1029 09:38:02.100081       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:02.100122       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:38:02.100132       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:38:02.105271       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:38:02.111058       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:38:02.112844       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:38:02.115018       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:38:02.115172       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:38:02.115639       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-194729"
	I1029 09:38:02.115723       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:38:02.115787       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:38:02.115956       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:38:02.134789       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:38:02.134957       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:02.137463       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:38:02.139688       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:38:02.141987       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:38:02.152104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:38:02.152224       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:38:02.152237       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:38:02.155762       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:38:02.160425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:38:02.170322       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:38:02.170468       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:38:02.174460       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6] <==
	I1029 09:37:59.840368       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:37:59.961125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:38:00.079765       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:38:00.079812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:38:00.079885       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:38:00.379197       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:38:00.379356       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:38:00.393010       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:38:00.393466       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:38:00.393487       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:38:00.400199       1 config.go:200] "Starting service config controller"
	I1029 09:38:00.412277       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:38:00.432499       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:38:00.432523       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:38:00.432601       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:38:00.432607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:38:00.433563       1 config.go:309] "Starting node config controller"
	I1029 09:38:00.433572       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:38:00.433579       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:38:00.516008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:38:00.533482       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:38:00.533529       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268] <==
	I1029 09:37:55.265556       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:37:58.632853       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:37:58.632890       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:37:58.632900       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:37:58.632909       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:37:58.796927       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:37:58.796959       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:37:58.808273       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:37:58.808339       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:37:58.809419       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:37:58.812110       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:37:58.911140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:37:57 newest-cni-194729 kubelet[728]: E1029 09:37:57.536019     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-194729\" not found" node="newest-cni-194729"
	Oct 29 09:37:58 newest-cni-194729 kubelet[728]: I1029 09:37:58.638211     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.088116     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.088223     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.088250     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.089290     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.099933     728 apiserver.go:52] "Watching apiserver"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.105213     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-194729\" already exists" pod="kube-system/kube-scheduler-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.105242     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.130737     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.135386     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-194729\" already exists" pod="kube-system/etcd-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.135430     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.156112     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-194729\" already exists" pod="kube-system/kube-apiserver-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.156145     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173336     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-xtables-lock\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173384     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-cni-cfg\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173406     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-lib-modules\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173452     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-lib-modules\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173486     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-xtables-lock\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.173772     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-194729\" already exists" pod="kube-system/kube-controller-manager-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.242450     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: W1029 09:37:59.458551     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/crio-41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319 WatchSource:0}: Error finding container 41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319: Status 404 returned error can't find the container with id 41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319
	Oct 29 09:38:02 newest-cni-194729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:38:02 newest-cni-194729 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:38:02 newest-cni-194729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194729 -n newest-cni-194729
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194729 -n newest-cni-194729: exit status 2 (400.011777ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
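Note: the status probes in this post-mortem render a single field of minikube's status output through a Go template, and the harness treats exit status 2 as informational ("may be ok") rather than fatal. A minimal standalone sketch of the same checks, assuming a profile and node both named newest-cni-194729 and the minikube binary on PATH:

	# Query one status field via a Go template (illustrative repro, not harness code).
	minikube status -p newest-cni-194729 -n newest-cni-194729 --format='{{.APIServer}}'
	minikube status -p newest-cni-194729 -n newest-cni-194729 --format='{{.Host}}'
	echo "exit code: $?"   # non-zero when some component is not in its expected state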
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-194729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl: exit status 1 (106.631279ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xw4k2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-65slf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wwlgl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl: exit status 1
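Note: the two commands above show the harness's pattern for inspecting unhealthy pods: list every pod whose phase is not Running via a field selector, then describe each name it found. The describe step reports NotFound here because it runs without -n, so kubectl looks only in the default namespace while the listed pods live in kube-system and kubernetes-dashboard. A minimal sketch of the same queries, assuming the newest-cni-194729 context still exists; <namespace> and <pod-name> are placeholders:

	# Names of all pods not in the Running phase, across namespaces (as the harness does).
	kubectl --context newest-cni-194729 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# Describe one of them; pass -n, otherwise kubectl searches only the default namespace.
	kubectl --context newest-cni-194729 -n <namespace> describe pod <pod-name>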
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-194729
helpers_test.go:243: (dbg) docker inspect newest-cni-194729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5",
	        "Created": "2025-10-29T09:37:08.716458695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209960,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:37:46.496155055Z",
	            "FinishedAt": "2025-10-29T09:37:45.595637086Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/hosts",
	        "LogPath": "/var/lib/docker/containers/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5-json.log",
	        "Name": "/newest-cni-194729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-194729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-194729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5",
	                "LowerDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5be8778fd3b9df93f0aa895218759f9aececd5c735bf336573191d9256f2e0be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-194729",
	                "Source": "/var/lib/docker/volumes/newest-cni-194729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-194729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-194729",
	                "name.minikube.sigs.k8s.io": "newest-cni-194729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b19db0a4f40deb1c725bef287ec99efe503708353e2d66b65c76d7502c149882",
	            "SandboxKey": "/var/run/docker/netns/b19db0a4f40d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-194729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:14:c6:60:a0:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f4995a956f2110ae36f130adfccc0f659ca020749dec44d4be9fd100beca009",
	                    "EndpointID": "a7875eff1266fff42307385e7ae4a1cde3dc4e8360f0a8ac510a1a3e483207e1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-194729",
	                        "e7978179791b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
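Note: the inspect output above shows the kic container publishing 22/tcp, 2376/tcp, 5000/tcp, 8443/tcp and 32443/tcp on dynamically assigned 127.0.0.1 host ports (33083-33087). minikube resolves these at runtime with a docker inspect Go template, as the Last Start log below does for the SSH port. A small sketch of the same lookup, assuming the container is still named newest-cni-194729:

	# Host port mapped to the container's SSH port (33083 in this run).
	docker container inspect newest-cni-194729 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# Same template for the apiserver port (8443/tcp -> 33086 here).
	docker container inspect newest-cni-194729 \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'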
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729: exit status 2 (455.87272ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-194729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-194729 logs -n 25: (1.112131165s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-505993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p no-preload-505993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:37 UTC │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-194729 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-194729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ newest-cni-194729 image list --format=json                                                                                                                                                                                                    │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ pause   │ -p newest-cni-194729 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:37:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:37:46.236271  209832 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:37:46.236508  209832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:37:46.236518  209832 out.go:374] Setting ErrFile to fd 2...
	I1029 09:37:46.236523  209832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:37:46.236791  209832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:37:46.237176  209832 out.go:368] Setting JSON to false
	I1029 09:37:46.238116  209832 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4818,"bootTime":1761725848,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:37:46.238189  209832 start.go:143] virtualization:  
	I1029 09:37:46.241209  209832 out.go:179] * [newest-cni-194729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:37:46.245005  209832 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:37:46.245177  209832 notify.go:221] Checking for updates...
	I1029 09:37:46.250685  209832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:37:46.253381  209832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:46.256197  209832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:37:46.259045  209832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:37:46.261912  209832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:37:46.265287  209832 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:46.265856  209832 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:37:46.289818  209832 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:37:46.289939  209832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:37:46.345735  209832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:37:46.335965268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:37:46.346114  209832 docker.go:319] overlay module found
	I1029 09:37:46.349330  209832 out.go:179] * Using the docker driver based on existing profile
	I1029 09:37:46.351394  209832 start.go:309] selected driver: docker
	I1029 09:37:46.351410  209832 start.go:930] validating driver "docker" against &{Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:46.351561  209832 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:37:46.354073  209832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:37:46.408450  209832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:37:46.398427712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:37:46.408814  209832 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:37:46.408853  209832 cni.go:84] Creating CNI manager for ""
	I1029 09:37:46.408914  209832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:46.408952  209832 start.go:353] cluster config:
	{Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:46.413981  209832 out.go:179] * Starting "newest-cni-194729" primary control-plane node in "newest-cni-194729" cluster
	I1029 09:37:46.417076  209832 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:37:46.420002  209832 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:37:46.422706  209832 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:46.422759  209832 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:37:46.422772  209832 cache.go:59] Caching tarball of preloaded images
	I1029 09:37:46.422811  209832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:37:46.422857  209832 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:37:46.422868  209832 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:37:46.422986  209832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json ...
	I1029 09:37:46.442318  209832 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:37:46.442344  209832 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:37:46.442356  209832 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:37:46.442379  209832 start.go:360] acquireMachinesLock for newest-cni-194729: {Name:mkd3ffc0a88229da12feec44aaf76435e580410c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:37:46.442441  209832 start.go:364] duration metric: took 35.462µs to acquireMachinesLock for "newest-cni-194729"
	I1029 09:37:46.442464  209832 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:37:46.442470  209832 fix.go:54] fixHost starting: 
	I1029 09:37:46.442740  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:46.460186  209832 fix.go:112] recreateIfNeeded on newest-cni-194729: state=Stopped err=<nil>
	W1029 09:37:46.460216  209832 fix.go:138] unexpected machine state, will restart: <nil>
	W1029 09:37:44.119549  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:46.618579  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:46.463583  209832 out.go:252] * Restarting existing docker container for "newest-cni-194729" ...
	I1029 09:37:46.463671  209832 cli_runner.go:164] Run: docker start newest-cni-194729
	I1029 09:37:46.720665  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:46.741747  209832 kic.go:430] container "newest-cni-194729" state is running.
	I1029 09:37:46.743379  209832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:46.766284  209832 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/config.json ...
	I1029 09:37:46.766522  209832 machine.go:94] provisionDockerMachine start ...
	I1029 09:37:46.766630  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:46.788703  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:46.789529  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:46.789550  209832 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:37:46.790783  209832 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1029 09:37:49.940048  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-194729
	
	I1029 09:37:49.940072  209832 ubuntu.go:182] provisioning hostname "newest-cni-194729"
	I1029 09:37:49.940142  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:49.958547  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:49.958844  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:49.958858  209832 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-194729 && echo "newest-cni-194729" | sudo tee /etc/hostname
	I1029 09:37:50.125915  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-194729
	
	I1029 09:37:50.125991  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.143348  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:50.143716  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:50.143742  209832 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-194729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-194729/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-194729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:37:50.296645  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:37:50.296673  209832 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:37:50.296695  209832 ubuntu.go:190] setting up certificates
	I1029 09:37:50.296712  209832 provision.go:84] configureAuth start
	I1029 09:37:50.296776  209832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:50.315356  209832 provision.go:143] copyHostCerts
	I1029 09:37:50.315432  209832 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:37:50.315452  209832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:37:50.315529  209832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:37:50.315637  209832 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:37:50.315648  209832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:37:50.315677  209832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:37:50.315745  209832 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:37:50.315754  209832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:37:50.315780  209832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:37:50.315843  209832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.newest-cni-194729 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-194729]
	I1029 09:37:50.449730  209832 provision.go:177] copyRemoteCerts
	I1029 09:37:50.449795  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:37:50.449833  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.471811  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:50.576205  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:37:50.593637  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:37:50.613347  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:37:50.633127  209832 provision.go:87] duration metric: took 336.388525ms to configureAuth
	I1029 09:37:50.633154  209832 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:37:50.633358  209832 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:50.633464  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.654940  209832 main.go:143] libmachine: Using SSH client type: native
	I1029 09:37:50.655265  209832 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1029 09:37:50.655285  209832 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:37:50.956582  209832 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:37:50.956607  209832 machine.go:97] duration metric: took 4.190076075s to provisionDockerMachine
	I1029 09:37:50.956618  209832 start.go:293] postStartSetup for "newest-cni-194729" (driver="docker")
	I1029 09:37:50.956630  209832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:37:50.956704  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:37:50.956768  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:50.974822  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:51.084528  209832 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:37:51.087877  209832 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:37:51.087904  209832 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:37:51.087915  209832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:37:51.087968  209832 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:37:51.088052  209832 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:37:51.088168  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:37:51.095685  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:37:51.114042  209832 start.go:296] duration metric: took 157.407585ms for postStartSetup
	I1029 09:37:51.114118  209832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:37:51.114169  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:51.133989  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	W1029 09:37:49.118820  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:51.121757  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:51.237457  209832 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:37:51.242441  209832 fix.go:56] duration metric: took 4.799963975s for fixHost
	I1029 09:37:51.242465  209832 start.go:83] releasing machines lock for "newest-cni-194729", held for 4.800011072s
	I1029 09:37:51.242558  209832 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-194729
	I1029 09:37:51.260407  209832 ssh_runner.go:195] Run: cat /version.json
	I1029 09:37:51.260464  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:51.260720  209832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:37:51.260813  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:51.280046  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:51.288283  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:51.472583  209832 ssh_runner.go:195] Run: systemctl --version
	I1029 09:37:51.478953  209832 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:37:51.517013  209832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:37:51.522072  209832 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:37:51.522141  209832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:37:51.529971  209832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:37:51.529995  209832 start.go:496] detecting cgroup driver to use...
	I1029 09:37:51.530045  209832 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:37:51.530098  209832 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:37:51.545795  209832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:37:51.559767  209832 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:37:51.559878  209832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:37:51.575920  209832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:37:51.589465  209832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:37:51.718490  209832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:37:51.842425  209832 docker.go:234] disabling docker service ...
	I1029 09:37:51.842535  209832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:37:51.859778  209832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:37:51.874643  209832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:37:52.000985  209832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:37:52.127826  209832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:37:52.141437  209832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:37:52.158601  209832 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:37:52.158710  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.168210  209832 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:37:52.168375  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.178235  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.189634  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.198861  209832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:37:52.207094  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.218395  209832 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.226683  209832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:37:52.235280  209832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:37:52.243965  209832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:37:52.251388  209832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:52.375177  209832 ssh_runner.go:195] Run: sudo systemctl restart crio
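	The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf before that restart. A rough sketch of the shape the drop-in ends up with, reconstructed from the commands in this log rather than read back from the node (the section headers are where stock CRI-O keeps these keys; the sed edits themselves are section-agnostic):

	    # /etc/crio/crio.conf.d/02-crio.conf (approximate, reconstructed)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]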
	I1029 09:37:52.502409  209832 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:37:52.502560  209832 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:37:52.507028  209832 start.go:564] Will wait 60s for crictl version
	I1029 09:37:52.507138  209832 ssh_runner.go:195] Run: which crictl
	I1029 09:37:52.513778  209832 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:37:52.541733  209832 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
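	The crictl calls above work because /etc/crictl.yaml, written a few lines earlier, pins runtime-endpoint to the CRI-O socket. A minimal manual check of the same wiring, using only standard crictl flags:

	    sudo cat /etc/crictl.yaml
	    # Equivalent to the implicit config: point crictl at the socket explicitly
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl info | head -n 20   # runtime status and config as JSON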
	I1029 09:37:52.541911  209832 ssh_runner.go:195] Run: crio --version
	I1029 09:37:52.573881  209832 ssh_runner.go:195] Run: crio --version
	I1029 09:37:52.614136  209832 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:37:52.617048  209832 cli_runner.go:164] Run: docker network inspect newest-cni-194729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:37:52.633781  209832 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:37:52.637601  209832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
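	The one-liner above is an idempotent pin: strip any existing host.minikube.internal entry, append a fresh one, then copy the temp file back over /etc/hosts. A generalized sketch of the same idiom (NAME and IP are the values from this log; everything else is illustrative):

	    NAME=host.minikube.internal
	    IP=192.168.85.1
	    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > "/tmp/hosts.$$" \
	      && sudo cp "/tmp/hosts.$$" /etc/hosts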
	I1029 09:37:52.650217  209832 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:37:52.652987  209832 kubeadm.go:884] updating cluster {Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:37:52.653130  209832 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:37:52.653221  209832 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:37:52.689629  209832 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:37:52.689652  209832 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:37:52.689718  209832 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:37:52.720898  209832 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:37:52.720918  209832 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:37:52.720926  209832 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:37:52.721017  209832 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-194729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
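	The [Service] block above uses the standard systemd override idiom: an empty ExecStart= clears the packaged command line, and the second ExecStart= re-declares it with minikube's flags. The rendered text is what gets written as the 10-kubeadm.conf drop-in a few lines below; the sketch here only restates the pattern, with the flag list abbreviated, so treat it as illustrative rather than the exact file:

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --hostname-override=newest-cni-194729
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet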
	I1029 09:37:52.721092  209832 ssh_runner.go:195] Run: crio config
	I1029 09:37:52.775574  209832 cni.go:84] Creating CNI manager for ""
	I1029 09:37:52.775641  209832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:37:52.775678  209832 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:37:52.775748  209832 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-194729 NodeName:newest-cni-194729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:37:52.775912  209832 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-194729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:37:52.776028  209832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:37:52.785327  209832 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:37:52.785403  209832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:37:52.793533  209832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:37:52.806927  209832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:37:52.821164  209832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
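	With the rendered config now on the node as /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked independently of minikube. This is an optional sketch, assuming kubeadm is staged next to kubelet under /var/lib/minikube/binaries as minikube normally does; "kubeadm config validate" exists only in recent kubeadm releases, while "init --dry-run" exercises the same parsing on older ones:

	    KUBEADM=/var/lib/minikube/binaries/v1.34.1/kubeadm
	    sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new
	    # Fallback for releases without "config validate":
	    sudo "$KUBEADM" init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run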
	I1029 09:37:52.835093  209832 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:37:52.838849  209832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:37:52.849601  209832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:52.972402  209832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:52.989865  209832 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729 for IP: 192.168.85.2
	I1029 09:37:52.989889  209832 certs.go:195] generating shared ca certs ...
	I1029 09:37:52.989906  209832 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:52.990032  209832 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:37:52.990077  209832 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:37:52.990088  209832 certs.go:257] generating profile certs ...
	I1029 09:37:52.990166  209832 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/client.key
	I1029 09:37:52.990244  209832 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key.f97f549a
	I1029 09:37:52.990292  209832 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key
	I1029 09:37:52.990401  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:37:52.990445  209832 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:37:52.990459  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:37:52.990486  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:37:52.990511  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:37:52.990535  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:37:52.990585  209832 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:37:52.991152  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:37:53.019704  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:37:53.043086  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:37:53.065990  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:37:53.089826  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:37:53.113439  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:37:53.135950  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:37:53.163790  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/newest-cni-194729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:37:53.188211  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:37:53.211390  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:37:53.234831  209832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:37:53.253936  209832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:37:53.270571  209832 ssh_runner.go:195] Run: openssl version
	I1029 09:37:53.277026  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:37:53.288543  209832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:53.292163  209832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:53.292267  209832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:37:53.339307  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:37:53.352376  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:37:53.362884  209832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:37:53.367195  209832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:37:53.367270  209832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:37:53.409375  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:37:53.417386  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:37:53.425810  209832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:37:53.429463  209832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:37:53.429581  209832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:37:53.475895  209832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
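	Each openssl x509 -hash / ln -fs pair above builds the standard hashed trust directory: the CA is linked as <subject-hash>.0 so OpenSSL can locate it by hash inside /etc/ssl/certs. The generic form of one such step, plus a check that the hashed lookup resolves (paths taken from this log):

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	    # The self-signed CA should now verify purely via the hashed directory:
	    openssl verify -CApath /etc/ssl/certs "$CERT"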
	I1029 09:37:53.483702  209832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:37:53.488410  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:37:53.529515  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:37:53.578455  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:37:53.622347  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:37:53.691876  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:37:53.742113  209832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
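	The -checkend 86400 probes above make openssl exit non-zero if a certificate expires within the next 24 hours. The same sweep, compressed into a loop over the certs checked in this log:

	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	             etcd/server etcd/healthcheck-client etcd/peer; do
	      sudo openssl x509 -noout -checkend 86400 \
	        -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: ${c}"
	    done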
	I1029 09:37:53.854449  209832 kubeadm.go:401] StartCluster: {Name:newest-cni-194729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-194729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:37:53.854550  209832 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:37:53.854643  209832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:37:53.913359  209832 cri.go:89] found id: "14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2"
	I1029 09:37:53.913395  209832 cri.go:89] found id: "b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268"
	I1029 09:37:53.913401  209832 cri.go:89] found id: "2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097"
	I1029 09:37:53.913406  209832 cri.go:89] found id: "3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7"
	I1029 09:37:53.913409  209832 cri.go:89] found id: ""
	I1029 09:37:53.913467  209832 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:37:53.931406  209832 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:37:53Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:37:53.931515  209832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:37:53.946080  209832 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:37:53.946153  209832 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:37:53.946234  209832 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:37:53.959543  209832 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:37:53.960205  209832 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-194729" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:53.960519  209832 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-194729" cluster setting kubeconfig missing "newest-cni-194729" context setting]
	I1029 09:37:53.960973  209832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:53.962573  209832 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:37:53.993252  209832 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:37:53.993329  209832 kubeadm.go:602] duration metric: took 47.155238ms to restartPrimaryControlPlane
	I1029 09:37:53.993355  209832 kubeadm.go:403] duration metric: took 138.917319ms to StartCluster
	I1029 09:37:53.993402  209832 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:53.993484  209832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:37:53.994451  209832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:37:53.994711  209832 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:37:53.995095  209832 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:37:53.995163  209832 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-194729"
	I1029 09:37:53.995181  209832 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-194729"
	W1029 09:37:53.995187  209832 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:37:53.995208  209832 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:53.995663  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:53.996052  209832 config.go:182] Loaded profile config "newest-cni-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:37:53.996131  209832 addons.go:70] Setting dashboard=true in profile "newest-cni-194729"
	I1029 09:37:53.996168  209832 addons.go:239] Setting addon dashboard=true in "newest-cni-194729"
	W1029 09:37:53.996193  209832 addons.go:248] addon dashboard should already be in state true
	I1029 09:37:53.996233  209832 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:53.996684  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:53.998784  209832 addons.go:70] Setting default-storageclass=true in profile "newest-cni-194729"
	I1029 09:37:53.998841  209832 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-194729"
	I1029 09:37:53.999157  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:53.999215  209832 out.go:179] * Verifying Kubernetes components...
	I1029 09:37:54.011735  209832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:37:54.050279  209832 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:37:54.054200  209832 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:37:54.057502  209832 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:37:54.057808  209832 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:54.057824  209832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:37:54.057889  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:54.062144  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:37:54.062170  209832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:37:54.062237  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:54.066826  209832 addons.go:239] Setting addon default-storageclass=true in "newest-cni-194729"
	W1029 09:37:54.066847  209832 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:37:54.066872  209832 host.go:66] Checking if "newest-cni-194729" exists ...
	I1029 09:37:54.067315  209832 cli_runner.go:164] Run: docker container inspect newest-cni-194729 --format={{.State.Status}}
	I1029 09:37:54.129737  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:54.139439  209832 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:54.139459  209832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:37:54.139519  209832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-194729
	I1029 09:37:54.142634  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:54.170978  209832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/newest-cni-194729/id_rsa Username:docker}
	I1029 09:37:54.383048  209832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:37:54.397765  209832 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:37:54.397902  209832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:37:54.416908  209832 api_server.go:72] duration metric: took 422.127604ms to wait for apiserver process to appear ...
	I1029 09:37:54.416980  209832 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:37:54.417012  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:54.428793  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:37:54.428814  209832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:37:54.447704  209832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:37:54.464150  209832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:37:54.517321  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:37:54.517341  209832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:37:54.585479  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:37:54.585508  209832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:37:54.636632  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:37:54.636663  209832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:37:54.671039  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:37:54.671078  209832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:37:54.691862  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:37:54.691898  209832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:37:54.717320  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:37:54.717345  209832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:37:54.742088  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:37:54.742123  209832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:37:54.766963  209832 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:37:54.766989  209832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:37:54.791694  209832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
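	The single kubectl apply above pushes all ten dashboard manifests in one shot using the node-local kubeconfig. A quick follow-up check that the objects landed (the kubernetes-dashboard namespace name is from the stock minikube dashboard manifests, not from this log, so treat it as an assumption):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,pods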
	W1029 09:37:53.619629  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:37:56.118884  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	I1029 09:37:58.554006  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:37:58.554036  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:37:58.554051  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.070764  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.070803  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:37:59.070818  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.101328  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.101358  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:37:59.417822  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.434438  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.434480  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:37:59.917826  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:37:59.937364  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:37:59.937392  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:38:00.417941  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:38:00.453237  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:38:00.453265  209832 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:38:00.545865  209832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.098130712s)
	I1029 09:38:00.545938  209832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.08176917s)
	I1029 09:38:00.546332  209832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.754586379s)
	I1029 09:38:00.549637  209832 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-194729 addons enable metrics-server
	
	I1029 09:38:00.572623  209832 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1029 09:38:00.575643  209832 addons.go:515] duration metric: took 6.580531736s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:38:00.917266  209832 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:38:00.928941  209832 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:38:00.930425  209832 api_server.go:141] control plane version: v1.34.1
	I1029 09:38:00.930516  209832 api_server.go:131] duration metric: took 6.513506521s to wait for apiserver health ...
	I1029 09:38:00.930543  209832 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:38:00.934799  209832 system_pods.go:59] 8 kube-system pods found
	I1029 09:38:00.934834  209832 system_pods.go:61] "coredns-66bc5c9577-xw4k2" [16536d62-45b6-4dbb-a119-2f03bc0dab76] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:38:00.934844  209832 system_pods.go:61] "etcd-newest-cni-194729" [a73be25e-1001-47bc-a73e-81d0b4a407a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:38:00.934849  209832 system_pods.go:61] "kindnet-4qfvm" [aaa1a0aa-75fc-418d-b140-ffa0a0dfe864] Running
	I1029 09:38:00.934856  209832 system_pods.go:61] "kube-apiserver-newest-cni-194729" [ac1d73e9-32a0-47f5-9b54-d3e7441d00c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:38:00.934862  209832 system_pods.go:61] "kube-controller-manager-newest-cni-194729" [c77285b8-4ae6-4f1b-8552-2c45a600d458] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:38:00.934868  209832 system_pods.go:61] "kube-proxy-grr4p" [55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad] Running
	I1029 09:38:00.934880  209832 system_pods.go:61] "kube-scheduler-newest-cni-194729" [189fc533-1dab-4e25-8187-da8f16a8a131] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:38:00.934885  209832 system_pods.go:61] "storage-provisioner" [a55079c0-4415-4c57-b3db-6c95a7876df1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:38:00.934892  209832 system_pods.go:74] duration metric: took 4.328835ms to wait for pod list to return data ...
	I1029 09:38:00.934906  209832 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:38:00.937100  209832 default_sa.go:45] found service account: "default"
	I1029 09:38:00.937136  209832 default_sa.go:55] duration metric: took 2.223708ms for default service account to be created ...
	I1029 09:38:00.937149  209832 kubeadm.go:587] duration metric: took 6.942389123s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:38:00.937165  209832 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:38:00.939444  209832 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:38:00.939479  209832 node_conditions.go:123] node cpu capacity is 2
	I1029 09:38:00.939493  209832 node_conditions.go:105] duration metric: took 2.322951ms to run NodePressure ...
	I1029 09:38:00.939513  209832 start.go:242] waiting for startup goroutines ...
	I1029 09:38:00.939526  209832 start.go:247] waiting for cluster config update ...
	I1029 09:38:00.939539  209832 start.go:256] writing updated cluster config ...
	I1029 09:38:00.939872  209832 ssh_runner.go:195] Run: rm -f paused
	I1029 09:38:01.011956  209832 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:38:01.017133  209832 out.go:179] * Done! kubectl is now configured to use "newest-cni-194729" cluster and "default" namespace by default
	W1029 09:37:58.619040  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	W1029 09:38:00.619835  202937 node_ready.go:57] node "default-k8s-diff-port-154565" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.411522224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.416090741Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-grr4p/POD" id=fe069ffd-fb48-417d-969e-3c64c7084299 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.416158516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.436778287Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e8316903-7939-4898-870d-203fe1bc64ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.444633882Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fe069ffd-fb48-417d-969e-3c64c7084299 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.470124056Z" level=info msg="Ran pod sandbox 41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319 with infra container: kube-system/kindnet-4qfvm/POD" id=e8316903-7939-4898-870d-203fe1bc64ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.473591508Z" level=info msg="Ran pod sandbox 3c2bf2fe4e94ede75df53cdf1c53738cbc012bd47bc6887c9cdc33e6ca8dba46 with infra container: kube-system/kube-proxy-grr4p/POD" id=fe069ffd-fb48-417d-969e-3c64c7084299 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.488805578Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e8fe9219-f2b9-4e90-a5aa-7fb667905eac name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.48911804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=24764a62-22b8-4d4c-9a9e-3bfa08c50389 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.490603452Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ba88c1a4-253f-4e4e-b92c-5213d7180f73 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.491890069Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7d91bec2-d6fe-4e50-a99c-80af79f639ae name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.493891999Z" level=info msg="Creating container: kube-system/kube-proxy-grr4p/kube-proxy" id=3a958890-989e-4dad-b3b0-fa0aeb7cb594 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.493990584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.498858156Z" level=info msg="Creating container: kube-system/kindnet-4qfvm/kindnet-cni" id=61c93bdd-2639-4072-a345-79cf6d7048c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.499145215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.522744271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.523263668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.524183792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.52544663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.564048954Z" level=info msg="Created container 78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf: kube-system/kindnet-4qfvm/kindnet-cni" id=61c93bdd-2639-4072-a345-79cf6d7048c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.568547514Z" level=info msg="Starting container: 78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf" id=502143b3-a7d2-4c0c-bb80-bfba2042e94e name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.576803367Z" level=info msg="Started container" PID=1064 containerID=78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf description=kube-system/kindnet-4qfvm/kindnet-cni id=502143b3-a7d2-4c0c-bb80-bfba2042e94e name=/runtime.v1.RuntimeService/StartContainer sandboxID=41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.670508604Z" level=info msg="Created container 67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6: kube-system/kube-proxy-grr4p/kube-proxy" id=3a958890-989e-4dad-b3b0-fa0aeb7cb594 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.678015248Z" level=info msg="Starting container: 67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6" id=bfeac4b2-550d-40cd-81ff-053463397cec name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:37:59 newest-cni-194729 crio[611]: time="2025-10-29T09:37:59.68959415Z" level=info msg="Started container" PID=1067 containerID=67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6 description=kube-system/kube-proxy-grr4p/kube-proxy id=bfeac4b2-550d-40cd-81ff-053463397cec name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2bf2fe4e94ede75df53cdf1c53738cbc012bd47bc6887c9cdc33e6ca8dba46
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	67ac493578abd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   3c2bf2fe4e94e       kube-proxy-grr4p                            kube-system
	78ea18453b16f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   41e71ebfcaddf       kindnet-4qfvm                               kube-system
	14415e882c21a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   8a6f69794b305       etcd-newest-cni-194729                      kube-system
	b4fa523dc72d0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   7eaad0cc42a09       kube-scheduler-newest-cni-194729            kube-system
	2bc825d65f39d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   f09534d9c2fca       kube-controller-manager-newest-cni-194729   kube-system
	3f699bcbf2930       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   e44cf3d897afa       kube-apiserver-newest-cni-194729            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-194729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-194729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=newest-cni-194729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_37_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:37:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-194729
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:37:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Oct 2025 09:37:59 +0000   Wed, 29 Oct 2025 09:37:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-194729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c2631ecb-f2d1-41a4-93ae-1b71955be2b7
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-194729                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-4qfvm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-194729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-194729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-grr4p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-194729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-194729 event: Registered Node newest-cni-194729 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-194729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-194729 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-194729 event: Registered Node newest-cni-194729 in Controller
	
	
	==> dmesg <==
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	[Oct29 09:37] overlayfs: idmapped layers are currently not supported
	[ +19.842209] overlayfs: idmapped layers are currently not supported
	[ +25.062735] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [14415e882c21abfaeb36511e3144bac1d6977e095c747a0d5797c597e8b5f6a2] <==
	{"level":"warn","ts":"2025-10-29T09:37:57.462539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.480548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.497283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.516613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.532479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.545905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.564960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.584801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.605222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.633214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.657567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.671450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.680601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.701228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.713617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.730854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.747854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.762828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.777682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.794775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.829963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.850394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.864260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.878327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:57.934572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:38:07 up  1:20,  0 user,  load average: 3.94, 3.89, 3.02
	Linux newest-cni-194729 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78ea18453b16f09be64fa96ee34cd1c75e82d86e539174437bec994605f727cf] <==
	I1029 09:37:59.754668       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:37:59.755008       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:37:59.755105       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:37:59.755117       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:37:59.755130       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:37:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:37:59.952661       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:37:59.952679       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:37:59.952777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:37:59.954508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [3f699bcbf29302709f491025ce5a2e03043b5bd782958bc0c4354f91b754daf7] <==
	I1029 09:37:58.988846       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:37:58.842750       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:37:58.989044       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:37:58.841321       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:37:58.842670       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:37:59.041783       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:37:59.041892       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:37:59.041903       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:37:59.041910       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:37:59.041916       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:37:59.046708       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1029 09:37:59.090457       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:37:59.186488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:37:59.526800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:37:59.815578       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:37:59.908237       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:37:59.980570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:38:00.017660       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:38:00.363118       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.14.58"}
	I1029 09:38:00.428457       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.75.113"}
	I1029 09:38:02.518049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:38:02.567128       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:38:02.626336       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:38:02.672193       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2bc825d65f39d967f59d22d108f0a7e5b41960b623c3cac303a998196c5da097] <==
	I1029 09:38:02.100081       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:02.100122       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:38:02.100132       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:38:02.105271       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:38:02.111058       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:38:02.112844       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:38:02.115018       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:38:02.115172       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:38:02.115639       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-194729"
	I1029 09:38:02.115723       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:38:02.115787       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:38:02.115956       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:38:02.134789       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:38:02.134957       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:02.137463       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:38:02.139688       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:38:02.141987       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:38:02.152104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:38:02.152224       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:38:02.152237       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:38:02.155762       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:38:02.160425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:38:02.170322       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:38:02.170468       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:38:02.174460       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [67ac493578abd7ea022b8ea1e8b902596013c41ae6c03d4bba3596e07c6a14d6] <==
	I1029 09:37:59.840368       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:37:59.961125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:38:00.079765       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:38:00.079812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:38:00.079885       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:38:00.379197       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:38:00.379356       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:38:00.393010       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:38:00.393466       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:38:00.393487       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:38:00.400199       1 config.go:200] "Starting service config controller"
	I1029 09:38:00.412277       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:38:00.432499       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:38:00.432523       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:38:00.432601       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:38:00.432607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:38:00.433563       1 config.go:309] "Starting node config controller"
	I1029 09:38:00.433572       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:38:00.433579       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:38:00.516008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:38:00.533482       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:38:00.533529       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b4fa523dc72d03d9894efe1c083692461564890ac2212d9c1f44a74d1e81e268] <==
	I1029 09:37:55.265556       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:37:58.632853       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:37:58.632890       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:37:58.632900       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:37:58.632909       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:37:58.796927       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:37:58.796959       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:37:58.808273       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:37:58.808339       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:37:58.809419       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:37:58.812110       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:37:58.911140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:37:57 newest-cni-194729 kubelet[728]: E1029 09:37:57.536019     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-194729\" not found" node="newest-cni-194729"
	Oct 29 09:37:58 newest-cni-194729 kubelet[728]: I1029 09:37:58.638211     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.088116     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.088223     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.088250     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.089290     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.099933     728 apiserver.go:52] "Watching apiserver"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.105213     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-194729\" already exists" pod="kube-system/kube-scheduler-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.105242     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.130737     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.135386     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-194729\" already exists" pod="kube-system/etcd-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.135430     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.156112     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-194729\" already exists" pod="kube-system/kube-apiserver-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.156145     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173336     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-xtables-lock\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173384     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-cni-cfg\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173406     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-lib-modules\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173452     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad-lib-modules\") pod \"kube-proxy-grr4p\" (UID: \"55f7dc3f-12ef-4f5e-a6ad-fe25dc8c11ad\") " pod="kube-system/kube-proxy-grr4p"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.173486     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaa1a0aa-75fc-418d-b140-ffa0a0dfe864-xtables-lock\") pod \"kindnet-4qfvm\" (UID: \"aaa1a0aa-75fc-418d-b140-ffa0a0dfe864\") " pod="kube-system/kindnet-4qfvm"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: E1029 09:37:59.173772     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-194729\" already exists" pod="kube-system/kube-controller-manager-newest-cni-194729"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: I1029 09:37:59.242450     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 09:37:59 newest-cni-194729 kubelet[728]: W1029 09:37:59.458551     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e7978179791b35aa9070cf8de2c6631cc56026581af856b7eef35f6a16a4fbd5/crio-41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319 WatchSource:0}: Error finding container 41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319: Status 404 returned error can't find the container with id 41e71ebfcaddff5ebc8eda68751a2c3b9785bfbfa3a818ff0120c80b5a276319
	Oct 29 09:38:02 newest-cni-194729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:38:02 newest-cni-194729 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:38:02 newest-cni-194729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194729 -n newest-cni-194729
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-194729 -n newest-cni-194729: exit status 2 (408.67849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-194729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl: exit status 1 (95.453209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xw4k2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-65slf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wwlgl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-194729 describe pod coredns-66bc5c9577-xw4k2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-65slf kubernetes-dashboard-855c9754f9-wwlgl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-154565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-154565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.027646ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-154565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-154565 describe deploy/metrics-server -n kube-system: exit status 1 (75.757189ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-154565 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-154565
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-154565:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683",
	        "Created": "2025-10-29T09:36:47.880643174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203389,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:36:47.961318204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/hosts",
	        "LogPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683-json.log",
	        "Name": "/default-k8s-diff-port-154565",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-154565:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-154565",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683",
	                "LowerDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-154565",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-154565/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-154565",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-154565",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-154565",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cdb9ce80ab4cc4306a7df1c62dc90769e3c68940d874f03bf6692165ac44682f",
	            "SandboxKey": "/var/run/docker/netns/cdb9ce80ab4c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-154565": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:5f:9f:95:dd:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3acff3dac19998e01d626c0b1e4f259c12319017d7e423e1cda5eea55f18a36",
	                    "EndpointID": "9513ff5f6ffa19e91417bdf4d7bc4ed0d126359a0cc904dcf5800d7142834237",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-154565",
	                        "dfc2c419fe48"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
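A minimal sketch of pulling the relevant binding out of an inspect dump like the one above, assuming the container name from this profile; the Go template reads the host port mapped to the 8444/tcp API server port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-154565

Against the dump above this prints 33076, the same value listed under NetworkSettings.Ports.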
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-154565 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-154565 logs -n 25: (1.136954304s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-946178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │                     │
	│ stop    │ -p embed-certs-946178 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:35 UTC │
	│ start   │ -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:35 UTC │ 29 Oct 25 09:36 UTC │
	│ image   │ no-preload-505993 image list --format=json                                                                                                                                                                                                    │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p no-preload-505993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:37 UTC │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-194729 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-194729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ newest-cni-194729 image list --format=json                                                                                                                                                                                                    │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ pause   │ -p newest-cni-194729 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	│ delete  │ -p newest-cni-194729                                                                                                                                                                                                                          │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ delete  │ -p newest-cni-194729                                                                                                                                                                                                                          │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ start   │ -p auto-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-937200                  │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-154565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:38:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:38:10.427350  213005 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:38:10.427460  213005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:10.427472  213005 out.go:374] Setting ErrFile to fd 2...
	I1029 09:38:10.427477  213005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:10.427712  213005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:38:10.428118  213005 out.go:368] Setting JSON to false
	I1029 09:38:10.429056  213005 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4842,"bootTime":1761725848,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:38:10.429127  213005 start.go:143] virtualization:  
	I1029 09:38:10.432945  213005 out.go:179] * [auto-937200] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:38:10.436055  213005 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:38:10.436193  213005 notify.go:221] Checking for updates...
	I1029 09:38:10.442277  213005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:38:10.445231  213005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:10.448279  213005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:38:10.451174  213005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:38:10.454119  213005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:38:10.457559  213005 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:10.457664  213005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:38:10.493267  213005 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:38:10.493391  213005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:10.550046  213005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:38:10.540877816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:10.550152  213005 docker.go:319] overlay module found
	I1029 09:38:10.553379  213005 out.go:179] * Using the docker driver based on user configuration
	I1029 09:38:10.556182  213005 start.go:309] selected driver: docker
	I1029 09:38:10.556206  213005 start.go:930] validating driver "docker" against <nil>
	I1029 09:38:10.556220  213005 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:38:10.557001  213005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:10.621527  213005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 09:38:10.611718242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:10.621683  213005 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:38:10.621919  213005 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:38:10.624761  213005 out.go:179] * Using Docker driver with root privileges
	I1029 09:38:10.627502  213005 cni.go:84] Creating CNI manager for ""
	I1029 09:38:10.627568  213005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:10.627584  213005 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:38:10.627658  213005 start.go:353] cluster config:
	{Name:auto-937200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-937200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1029 09:38:10.630808  213005 out.go:179] * Starting "auto-937200" primary control-plane node in "auto-937200" cluster
	I1029 09:38:10.633540  213005 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:38:10.636483  213005 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:38:10.639287  213005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:38:10.639349  213005 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:38:10.639365  213005 cache.go:59] Caching tarball of preloaded images
	I1029 09:38:10.639461  213005 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:38:10.639475  213005 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:38:10.639584  213005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/config.json ...
	I1029 09:38:10.639607  213005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/config.json: {Name:mk7c584e4e8c55161713afb4c487902298f3c230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:10.639761  213005 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:38:10.659071  213005 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:38:10.659095  213005 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:38:10.659108  213005 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:38:10.659130  213005 start.go:360] acquireMachinesLock for auto-937200: {Name:mkec7b2a2891f62f57d6e18ca01411864c1110ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:38:10.659240  213005 start.go:364] duration metric: took 90.585µs to acquireMachinesLock for "auto-937200"
	I1029 09:38:10.659270  213005 start.go:93] Provisioning new machine with config: &{Name:auto-937200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-937200 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:38:10.659344  213005 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:38:10.662635  213005 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:38:10.662850  213005 start.go:159] libmachine.API.Create for "auto-937200" (driver="docker")
	I1029 09:38:10.662894  213005 client.go:173] LocalClient.Create starting
	I1029 09:38:10.662961  213005 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem
	I1029 09:38:10.662998  213005 main.go:143] libmachine: Decoding PEM data...
	I1029 09:38:10.663016  213005 main.go:143] libmachine: Parsing certificate...
	I1029 09:38:10.663072  213005 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem
	I1029 09:38:10.663099  213005 main.go:143] libmachine: Decoding PEM data...
	I1029 09:38:10.663113  213005 main.go:143] libmachine: Parsing certificate...
	I1029 09:38:10.663481  213005 cli_runner.go:164] Run: docker network inspect auto-937200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:38:10.679856  213005 cli_runner.go:211] docker network inspect auto-937200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:38:10.679943  213005 network_create.go:284] running [docker network inspect auto-937200] to gather additional debugging logs...
	I1029 09:38:10.679964  213005 cli_runner.go:164] Run: docker network inspect auto-937200
	W1029 09:38:10.695086  213005 cli_runner.go:211] docker network inspect auto-937200 returned with exit code 1
	I1029 09:38:10.695119  213005 network_create.go:287] error running [docker network inspect auto-937200]: docker network inspect auto-937200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-937200 not found
	I1029 09:38:10.695133  213005 network_create.go:289] output of [docker network inspect auto-937200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-937200 not found
	
	** /stderr **
	I1029 09:38:10.695250  213005 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:38:10.713151  213005 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
	I1029 09:38:10.713481  213005 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2a2304196dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:c9:a9:e0:d0:7a} reservation:<nil>}
	I1029 09:38:10.713876  213005 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e863a0178057 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:86:09:fc:5e:55} reservation:<nil>}
	I1029 09:38:10.714163  213005 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c3acff3dac19 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:94:16:18:e5:62} reservation:<nil>}
	I1029 09:38:10.714547  213005 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fb1b0}
	I1029 09:38:10.714572  213005 network_create.go:124] attempt to create docker network auto-937200 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1029 09:38:10.714630  213005 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-937200 auto-937200
	I1029 09:38:10.777441  213005 network_create.go:108] docker network auto-937200 192.168.85.0/24 created
	I1029 09:38:10.777471  213005 kic.go:121] calculated static IP "192.168.85.2" for the "auto-937200" container
	I1029 09:38:10.777544  213005 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:38:10.794156  213005 cli_runner.go:164] Run: docker volume create auto-937200 --label name.minikube.sigs.k8s.io=auto-937200 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:38:10.814448  213005 oci.go:103] Successfully created a docker volume auto-937200
	I1029 09:38:10.814550  213005 cli_runner.go:164] Run: docker run --rm --name auto-937200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-937200 --entrypoint /usr/bin/test -v auto-937200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:38:11.353448  213005 oci.go:107] Successfully prepared a docker volume auto-937200
	I1029 09:38:11.353502  213005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:38:11.353521  213005 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:38:11.353594  213005 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-937200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 29 09:38:05 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:05.641019552Z" level=info msg="Created container db3ec66a357c5a351fadf953925450afa0e13392b95ae78da06930133a92f2e7: kube-system/coredns-66bc5c9577-hbn59/coredns" id=e68dd8ca-9879-44bb-8cdd-6be175a52c12 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:38:05 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:05.642090389Z" level=info msg="Starting container: db3ec66a357c5a351fadf953925450afa0e13392b95ae78da06930133a92f2e7" id=aaf0d30f-5a61-464c-aeff-7d54eebb7493 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:38:05 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:05.645828334Z" level=info msg="Started container" PID=1736 containerID=db3ec66a357c5a351fadf953925450afa0e13392b95ae78da06930133a92f2e7 description=kube-system/coredns-66bc5c9577-hbn59/coredns id=aaf0d30f-5a61-464c-aeff-7d54eebb7493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6fcf15969d630e00c3ce1e9a77c8cb1ce21d05892f590b21202b3be1ac2ea147
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.414913635Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ed2aaf0c-e3f4-445b-9036-8aa19938b9ad name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.414982534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.428959054Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:380f41b414fb7af67f14126526ec776c54f4f6b164c75a558333de6f5b3b786f UID:324380b9-ea13-4bfc-97d9-f38c6b34fd12 NetNS:/var/run/netns/91b5a5be-7793-4cd2-b5d3-ba310ee1bf37 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049e178}] Aliases:map[]}"
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.429133283Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.43737714Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:380f41b414fb7af67f14126526ec776c54f4f6b164c75a558333de6f5b3b786f UID:324380b9-ea13-4bfc-97d9-f38c6b34fd12 NetNS:/var/run/netns/91b5a5be-7793-4cd2-b5d3-ba310ee1bf37 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049e178}] Aliases:map[]}"
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.437516981Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.440008401Z" level=info msg="Ran pod sandbox 380f41b414fb7af67f14126526ec776c54f4f6b164c75a558333de6f5b3b786f with infra container: default/busybox/POD" id=ed2aaf0c-e3f4-445b-9036-8aa19938b9ad name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.443369161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ff0f28e6-d09f-4472-983e-79857f5cff04 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.44352547Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ff0f28e6-d09f-4472-983e-79857f5cff04 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.443584293Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ff0f28e6-d09f-4472-983e-79857f5cff04 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.446617978Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f6aa111-0493-4a96-adf9-e0e4da9fa7a7 name=/runtime.v1.ImageService/PullImage
	Oct 29 09:38:09 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:09.448784381Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.702650249Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3f6aa111-0493-4a96-adf9-e0e4da9fa7a7 name=/runtime.v1.ImageService/PullImage
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.703301373Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e0f38cb-286e-41f5-8291-ee6d888edef1 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.705669558Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f40094c5-707b-4f78-b6f4-75351f28513f name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.712959823Z" level=info msg="Creating container: default/busybox/busybox" id=2abeca90-d63e-4298-b527-101606ec86d3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.713089556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.717793631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.718255026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.734576322Z" level=info msg="Created container 8d63800d7b015aa22904d08c60a20b24a566d71b41659484e9c84e21ba0c81d7: default/busybox/busybox" id=2abeca90-d63e-4298-b527-101606ec86d3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.738630233Z" level=info msg="Starting container: 8d63800d7b015aa22904d08c60a20b24a566d71b41659484e9c84e21ba0c81d7" id=6a857132-567d-45c6-9be3-529e3a2f95e7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:38:11 default-k8s-diff-port-154565 crio[839]: time="2025-10-29T09:38:11.741291984Z" level=info msg="Started container" PID=1792 containerID=8d63800d7b015aa22904d08c60a20b24a566d71b41659484e9c84e21ba0c81d7 description=default/busybox/busybox id=6a857132-567d-45c6-9be3-529e3a2f95e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=380f41b414fb7af67f14126526ec776c54f4f6b164c75a558333de6f5b3b786f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8d63800d7b015       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   380f41b414fb7       busybox                                                default
	db3ec66a357c5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   6fcf15969d630       coredns-66bc5c9577-hbn59                               kube-system
	c8bda00e6cf61       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   09c7fc1e3a6d5       storage-provisioner                                    kube-system
	06448861e0e1b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   5b9aa289e1f7d       kindnet-btswn                                          kube-system
	ff404e4ac60cc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   7d42716a63192       kube-proxy-vxlb9                                       kube-system
	17839fa891cce       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   2f49e37979206       kube-scheduler-default-k8s-diff-port-154565            kube-system
	33f4cf0d77495       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   720fc3e154fff       kube-controller-manager-default-k8s-diff-port-154565   kube-system
	472d2b03cd1be       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   7cbaafd71cc63       kube-apiserver-default-k8s-diff-port-154565            kube-system
	0df6c461daf49       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   5b19e1928f24e       etcd-default-k8s-diff-port-154565                      kube-system
	
	
	==> coredns [db3ec66a357c5a351fadf953925450afa0e13392b95ae78da06930133a92f2e7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35516 - 34468 "HINFO IN 5279709366156815635.3499644951355860833. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025132362s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-154565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-154565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=default-k8s-diff-port-154565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_37_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:37:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-154565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:38:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:38:18 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:38:18 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:38:18 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:38:18 +0000   Wed, 29 Oct 2025 09:38:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-154565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                78efc080-8619-433f-9174-c9ba8af774f1
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-hbn59                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-154565                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-btswn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-154565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-154565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-vxlb9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-154565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 54s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node default-k8s-diff-port-154565 event: Registered Node default-k8s-diff-port-154565 in Controller
	  Normal   NodeReady                15s   kubelet          Node default-k8s-diff-port-154565 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.070732] overlayfs: idmapped layers are currently not supported
	[Oct29 09:11] overlayfs: idmapped layers are currently not supported
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	[Oct29 09:37] overlayfs: idmapped layers are currently not supported
	[ +19.842209] overlayfs: idmapped layers are currently not supported
	[ +25.062735] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0df6c461daf4973108e0930492359007285b8479b5b65e7cc9e31bc2da4664c9] <==
	{"level":"warn","ts":"2025-10-29T09:37:11.785506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:11.837247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:11.840302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:11.868817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:11.937244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:11.957082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:11.984481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.005863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.028627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.065237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.105369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.172455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.208669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.238934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.278931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.296854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.333359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.356158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.382502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.413088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.450569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.485236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.536883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.568847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:37:12.692418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34254","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:38:19 up  1:20,  0 user,  load average: 3.52, 3.80, 3.00
	Linux default-k8s-diff-port-154565 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [06448861e0e1b9789638b718f16322802731bb4d05243a37315cf6d876787f32] <==
	I1029 09:37:24.158507       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:37:24.246728       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:37:24.246875       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:37:24.246888       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:37:24.246902       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:37:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:37:24.446183       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:37:24.446202       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:37:24.446211       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:37:24.446494       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:37:54.446330       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1029 09:37:54.446595       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:37:54.446617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:37:54.446746       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1029 09:37:55.947394       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:37:55.947441       1 metrics.go:72] Registering metrics
	I1029 09:37:55.947498       1 controller.go:711] "Syncing nftables rules"
	I1029 09:38:04.453273       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:38:04.453323       1 main.go:301] handling current node
	I1029 09:38:14.446363       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:38:14.446420       1 main.go:301] handling current node
	
	
	==> kube-apiserver [472d2b03cd1be8897a3540699985d769e02a16eab36c0464fe812a201a9ab96f] <==
	I1029 09:37:14.406135       1 controller.go:667] quota admission added evaluator for: namespaces
	E1029 09:37:14.420719       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1029 09:37:14.423184       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1029 09:37:14.510561       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:37:14.523951       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:14.543151       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:14.548744       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:37:14.635148       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:37:14.736966       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:37:14.761719       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:37:14.761804       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:37:15.830790       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:37:15.894961       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:37:16.021487       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:37:16.036074       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1029 09:37:16.037942       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:37:16.044477       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:37:16.958619       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:37:17.176844       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:37:17.249926       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:37:17.317372       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:37:22.873431       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:22.942233       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:37:22.984055       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:37:23.094969       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [33f4cf0d77495498d641c8113a03dba2ec572b02f357ea227fd5a7366e282aea] <==
	I1029 09:37:22.058116       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:37:22.059648       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:37:22.069870       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:37:22.072102       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1029 09:37:22.074403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:37:22.074486       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:37:22.074535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:37:22.080786       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:37:22.083985       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:37:22.086213       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:37:22.094063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:37:22.097323       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:37:22.097399       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:37:22.097622       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:37:22.097665       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:37:22.100164       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:37:22.101399       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:37:22.102496       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:37:22.105329       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:37:22.111826       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:37:22.111906       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:37:22.112050       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-154565"
	I1029 09:37:22.112097       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:37:22.141505       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:07.120281       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ff404e4ac60ccf94a74d7922b7878024cddf94b450f3594021462145094d564d] <==
	I1029 09:37:24.281596       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:37:24.398195       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:37:24.502941       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:37:24.503123       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:37:24.503257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:37:24.753421       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:37:24.753482       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:37:24.772898       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:37:24.773275       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:37:24.773506       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:37:24.775790       1 config.go:200] "Starting service config controller"
	I1029 09:37:24.778520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:37:24.779853       1 config.go:309] "Starting node config controller"
	I1029 09:37:24.781600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:37:24.781676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:37:24.777007       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:37:24.786379       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:37:24.777038       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:37:24.786411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:37:24.879898       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:37:24.887179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:37:24.887282       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [17839fa891cce8eb9fc625054077c697e0854d71d4b0e1cd6e11bd64eef2175a] <==
	I1029 09:37:12.285456       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:37:15.608020       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:37:15.608057       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:37:15.608069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:37:15.608076       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:37:15.638711       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:37:15.638747       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:37:15.647345       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:37:15.647501       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:37:15.655307       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:37:15.647514       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 09:37:15.674708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1029 09:37:16.856610       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:37:18 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:18.387781    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-154565" podStartSLOduration=1.3877625949999999 podStartE2EDuration="1.387762595s" podCreationTimestamp="2025-10-29 09:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:18.335345396 +0000 UTC m=+1.314378121" watchObservedRunningTime="2025-10-29 09:37:18.387762595 +0000 UTC m=+1.366795328"
	Oct 29 09:37:22 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:22.043114    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 29 09:37:22 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:22.048970    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.348032    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46793add-1a42-48cd-835c-69d4f9a1bf7d-kube-proxy\") pod \"kube-proxy-vxlb9\" (UID: \"46793add-1a42-48cd-835c-69d4f9a1bf7d\") " pod="kube-system/kube-proxy-vxlb9"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.348090    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46793add-1a42-48cd-835c-69d4f9a1bf7d-lib-modules\") pod \"kube-proxy-vxlb9\" (UID: \"46793add-1a42-48cd-835c-69d4f9a1bf7d\") " pod="kube-system/kube-proxy-vxlb9"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.348116    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46793add-1a42-48cd-835c-69d4f9a1bf7d-xtables-lock\") pod \"kube-proxy-vxlb9\" (UID: \"46793add-1a42-48cd-835c-69d4f9a1bf7d\") " pod="kube-system/kube-proxy-vxlb9"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.348136    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lfh\" (UniqueName: \"kubernetes.io/projected/46793add-1a42-48cd-835c-69d4f9a1bf7d-kube-api-access-n7lfh\") pod \"kube-proxy-vxlb9\" (UID: \"46793add-1a42-48cd-835c-69d4f9a1bf7d\") " pod="kube-system/kube-proxy-vxlb9"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.450373    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a7737b1f-9d42-4a7d-8bd7-84911d52c5f9-cni-cfg\") pod \"kindnet-btswn\" (UID: \"a7737b1f-9d42-4a7d-8bd7-84911d52c5f9\") " pod="kube-system/kindnet-btswn"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.450419    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7737b1f-9d42-4a7d-8bd7-84911d52c5f9-lib-modules\") pod \"kindnet-btswn\" (UID: \"a7737b1f-9d42-4a7d-8bd7-84911d52c5f9\") " pod="kube-system/kindnet-btswn"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.450451    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7737b1f-9d42-4a7d-8bd7-84911d52c5f9-xtables-lock\") pod \"kindnet-btswn\" (UID: \"a7737b1f-9d42-4a7d-8bd7-84911d52c5f9\") " pod="kube-system/kindnet-btswn"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.450469    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22nn9\" (UniqueName: \"kubernetes.io/projected/a7737b1f-9d42-4a7d-8bd7-84911d52c5f9-kube-api-access-22nn9\") pod \"kindnet-btswn\" (UID: \"a7737b1f-9d42-4a7d-8bd7-84911d52c5f9\") " pod="kube-system/kindnet-btswn"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:23.571764    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: W1029 09:37:23.859339    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/crio-7d42716a631921c308cb9628782a75c42e7e80d1f98d5d9a1b227281386f5b0d WatchSource:0}: Error finding container 7d42716a631921c308cb9628782a75c42e7e80d1f98d5d9a1b227281386f5b0d: Status 404 returned error can't find the container with id 7d42716a631921c308cb9628782a75c42e7e80d1f98d5d9a1b227281386f5b0d
	Oct 29 09:37:23 default-k8s-diff-port-154565 kubelet[1315]: W1029 09:37:23.929593    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/crio-5b9aa289e1f7d50d95c4157db95f85fc3a06abc75c8253e22b41c1d8bc2439aa WatchSource:0}: Error finding container 5b9aa289e1f7d50d95c4157db95f85fc3a06abc75c8253e22b41c1d8bc2439aa: Status 404 returned error can't find the container with id 5b9aa289e1f7d50d95c4157db95f85fc3a06abc75c8253e22b41c1d8bc2439aa
	Oct 29 09:37:24 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:24.551920    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-btswn" podStartSLOduration=1.551899835 podStartE2EDuration="1.551899835s" podCreationTimestamp="2025-10-29 09:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:24.523785672 +0000 UTC m=+7.502818405" watchObservedRunningTime="2025-10-29 09:37:24.551899835 +0000 UTC m=+7.530932560"
	Oct 29 09:37:25 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:37:25.941365    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vxlb9" podStartSLOduration=2.941348436 podStartE2EDuration="2.941348436s" podCreationTimestamp="2025-10-29 09:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:37:24.553724581 +0000 UTC m=+7.532757315" watchObservedRunningTime="2025-10-29 09:37:25.941348436 +0000 UTC m=+8.920381153"
	Oct 29 09:38:04 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:04.861030    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:38:05 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:05.096211    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3716ce63-bbfd-489a-a382-9c6d5dc40925-tmp\") pod \"storage-provisioner\" (UID: \"3716ce63-bbfd-489a-a382-9c6d5dc40925\") " pod="kube-system/storage-provisioner"
	Oct 29 09:38:05 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:05.096451    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlssl\" (UniqueName: \"kubernetes.io/projected/3716ce63-bbfd-489a-a382-9c6d5dc40925-kube-api-access-vlssl\") pod \"storage-provisioner\" (UID: \"3716ce63-bbfd-489a-a382-9c6d5dc40925\") " pod="kube-system/storage-provisioner"
	Oct 29 09:38:05 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:05.096571    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/571dd534-5c05-4ea1-b2fa-292f307b4037-config-volume\") pod \"coredns-66bc5c9577-hbn59\" (UID: \"571dd534-5c05-4ea1-b2fa-292f307b4037\") " pod="kube-system/coredns-66bc5c9577-hbn59"
	Oct 29 09:38:05 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:05.096676    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tzx\" (UniqueName: \"kubernetes.io/projected/571dd534-5c05-4ea1-b2fa-292f307b4037-kube-api-access-k9tzx\") pod \"coredns-66bc5c9577-hbn59\" (UID: \"571dd534-5c05-4ea1-b2fa-292f307b4037\") " pod="kube-system/coredns-66bc5c9577-hbn59"
	Oct 29 09:38:05 default-k8s-diff-port-154565 kubelet[1315]: W1029 09:38:05.567007    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/crio-6fcf15969d630e00c3ce1e9a77c8cb1ce21d05892f590b21202b3be1ac2ea147 WatchSource:0}: Error finding container 6fcf15969d630e00c3ce1e9a77c8cb1ce21d05892f590b21202b3be1ac2ea147: Status 404 returned error can't find the container with id 6fcf15969d630e00c3ce1e9a77c8cb1ce21d05892f590b21202b3be1ac2ea147
	Oct 29 09:38:06 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:06.646505    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.646487527 podStartE2EDuration="42.646487527s" podCreationTimestamp="2025-10-29 09:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:38:06.626981316 +0000 UTC m=+49.606014074" watchObservedRunningTime="2025-10-29 09:38:06.646487527 +0000 UTC m=+49.625520260"
	Oct 29 09:38:09 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:09.101635    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hbn59" podStartSLOduration=46.101613776 podStartE2EDuration="46.101613776s" podCreationTimestamp="2025-10-29 09:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:38:06.648583784 +0000 UTC m=+49.627616517" watchObservedRunningTime="2025-10-29 09:38:09.101613776 +0000 UTC m=+52.080646501"
	Oct 29 09:38:09 default-k8s-diff-port-154565 kubelet[1315]: I1029 09:38:09.140042    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlpp\" (UniqueName: \"kubernetes.io/projected/324380b9-ea13-4bfc-97d9-f38c6b34fd12-kube-api-access-gxlpp\") pod \"busybox\" (UID: \"324380b9-ea13-4bfc-97d9-f38c6b34fd12\") " pod="default/busybox"
	
	
	==> storage-provisioner [c8bda00e6cf61ee1e29a1f632854a6003464e99a1a1bee3bc85be1a0981192a8] <==
	I1029 09:38:05.664442       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:38:05.691460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:38:05.691593       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:38:05.701843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:05.728857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:38:05.771359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:38:05.771637       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-154565_4dc3c97a-e23a-48ca-8f6a-d6de261e40f0!
	I1029 09:38:05.775549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb2a2ad0-3fcc-4033-a090-3abddb1b193f", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-154565_4dc3c97a-e23a-48ca-8f6a-d6de261e40f0 became leader
	W1029 09:38:05.775798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:05.789444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:38:05.874643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-154565_4dc3c97a-e23a-48ca-8f6a-d6de261e40f0!
	W1029 09:38:07.792180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:07.797435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:09.800888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:09.807247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:11.811116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:11.816452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:13.819365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:13.825885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:15.829574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:15.839738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:17.842941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:17.847640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:19.850927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:38:19.855743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-154565 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-154565 --alsologtostderr -v=1: exit status 80 (2.015497194s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-154565 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:39:42.486038  218706 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:39:42.486270  218706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:39:42.486300  218706 out.go:374] Setting ErrFile to fd 2...
	I1029 09:39:42.486320  218706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:39:42.486677  218706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:39:42.487097  218706 out.go:368] Setting JSON to false
	I1029 09:39:42.487170  218706 mustload.go:66] Loading cluster: default-k8s-diff-port-154565
	I1029 09:39:42.487711  218706 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:39:42.488358  218706 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:39:42.509199  218706 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:39:42.509519  218706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:39:42.619382  218706 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:39:42.607751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:39:42.620019  218706 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-154565 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:39:42.623716  218706 out.go:179] * Pausing node default-k8s-diff-port-154565 ... 
	I1029 09:39:42.628827  218706 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:39:42.629165  218706 ssh_runner.go:195] Run: systemctl --version
	I1029 09:39:42.629205  218706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:39:42.657585  218706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:39:42.772392  218706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:39:42.794004  218706 pause.go:52] kubelet running: true
	I1029 09:39:42.794078  218706 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:39:43.084681  218706 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:39:43.084757  218706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:39:43.224907  218706 cri.go:89] found id: "0c86ad951f717e434b3bc0751b40d09aee480039cdbb2d71d3b5aba02ca39db8"
	I1029 09:39:43.224926  218706 cri.go:89] found id: "c46b79795aaad08becba49a7b200667b944eb335b0b342474d42e8439a790a5d"
	I1029 09:39:43.224931  218706 cri.go:89] found id: "996dd46a13bd9c4fbc716e270a5ee2bfd1b8ca9b3678e68b888aa222415a9866"
	I1029 09:39:43.224934  218706 cri.go:89] found id: "40419a34f22d499b5e10f2817ca3190043cf4654975faa221907811657572319"
	I1029 09:39:43.224937  218706 cri.go:89] found id: "76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471"
	I1029 09:39:43.224941  218706 cri.go:89] found id: "4ecc87c3c4efebb87e8579fe30d41b373305c1560267c5e5c1c7e4f651d75911"
	I1029 09:39:43.224943  218706 cri.go:89] found id: "fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944"
	I1029 09:39:43.224946  218706 cri.go:89] found id: "921026fa87ee220227613d52ff56bc6b3408a4d844d6176f9493e6f447ed8e33"
	I1029 09:39:43.224950  218706 cri.go:89] found id: "2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5"
	I1029 09:39:43.224962  218706 cri.go:89] found id: "def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5"
	I1029 09:39:43.224966  218706 cri.go:89] found id: "47c8964204a91d0b46d5e4ff09a253ddec6adc122582f93a5497e300ab1bf5ea"
	I1029 09:39:43.224969  218706 cri.go:89] found id: ""
	I1029 09:39:43.225045  218706 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:39:43.248092  218706 retry.go:31] will retry after 186.426364ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:39:43Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:39:43.435476  218706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:39:43.452638  218706 pause.go:52] kubelet running: false
	I1029 09:39:43.452725  218706 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:39:43.684966  218706 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:39:43.685051  218706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:39:43.759789  218706 cri.go:89] found id: "0c86ad951f717e434b3bc0751b40d09aee480039cdbb2d71d3b5aba02ca39db8"
	I1029 09:39:43.759808  218706 cri.go:89] found id: "c46b79795aaad08becba49a7b200667b944eb335b0b342474d42e8439a790a5d"
	I1029 09:39:43.759813  218706 cri.go:89] found id: "996dd46a13bd9c4fbc716e270a5ee2bfd1b8ca9b3678e68b888aa222415a9866"
	I1029 09:39:43.759817  218706 cri.go:89] found id: "40419a34f22d499b5e10f2817ca3190043cf4654975faa221907811657572319"
	I1029 09:39:43.759820  218706 cri.go:89] found id: "76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471"
	I1029 09:39:43.759823  218706 cri.go:89] found id: "4ecc87c3c4efebb87e8579fe30d41b373305c1560267c5e5c1c7e4f651d75911"
	I1029 09:39:43.759826  218706 cri.go:89] found id: "fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944"
	I1029 09:39:43.759841  218706 cri.go:89] found id: "921026fa87ee220227613d52ff56bc6b3408a4d844d6176f9493e6f447ed8e33"
	I1029 09:39:43.759844  218706 cri.go:89] found id: "2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5"
	I1029 09:39:43.759851  218706 cri.go:89] found id: "def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5"
	I1029 09:39:43.759858  218706 cri.go:89] found id: "47c8964204a91d0b46d5e4ff09a253ddec6adc122582f93a5497e300ab1bf5ea"
	I1029 09:39:43.759862  218706 cri.go:89] found id: ""
	I1029 09:39:43.759934  218706 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:39:43.771936  218706 retry.go:31] will retry after 360.35477ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:39:43Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:39:44.133550  218706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:39:44.147240  218706 pause.go:52] kubelet running: false
	I1029 09:39:44.147327  218706 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:39:44.319208  218706 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:39:44.319289  218706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:39:44.393076  218706 cri.go:89] found id: "0c86ad951f717e434b3bc0751b40d09aee480039cdbb2d71d3b5aba02ca39db8"
	I1029 09:39:44.393147  218706 cri.go:89] found id: "c46b79795aaad08becba49a7b200667b944eb335b0b342474d42e8439a790a5d"
	I1029 09:39:44.393160  218706 cri.go:89] found id: "996dd46a13bd9c4fbc716e270a5ee2bfd1b8ca9b3678e68b888aa222415a9866"
	I1029 09:39:44.393165  218706 cri.go:89] found id: "40419a34f22d499b5e10f2817ca3190043cf4654975faa221907811657572319"
	I1029 09:39:44.393168  218706 cri.go:89] found id: "76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471"
	I1029 09:39:44.393172  218706 cri.go:89] found id: "4ecc87c3c4efebb87e8579fe30d41b373305c1560267c5e5c1c7e4f651d75911"
	I1029 09:39:44.393175  218706 cri.go:89] found id: "fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944"
	I1029 09:39:44.393178  218706 cri.go:89] found id: "921026fa87ee220227613d52ff56bc6b3408a4d844d6176f9493e6f447ed8e33"
	I1029 09:39:44.393181  218706 cri.go:89] found id: "2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5"
	I1029 09:39:44.393189  218706 cri.go:89] found id: "def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5"
	I1029 09:39:44.393193  218706 cri.go:89] found id: "47c8964204a91d0b46d5e4ff09a253ddec6adc122582f93a5497e300ab1bf5ea"
	I1029 09:39:44.393196  218706 cri.go:89] found id: ""
	I1029 09:39:44.393246  218706 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:39:44.407747  218706 out.go:203] 
	W1029 09:39:44.410770  218706 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:39:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:39:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:39:44.410794  218706 out.go:285] * 
	* 
	W1029 09:39:44.416256  218706 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:39:44.421249  218706 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-154565 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-154565
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-154565:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683",
	        "Created": "2025-10-29T09:36:47.880643174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:38:33.507378256Z",
	            "FinishedAt": "2025-10-29T09:38:32.461497778Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/hosts",
	        "LogPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683-json.log",
	        "Name": "/default-k8s-diff-port-154565",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-154565:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-154565",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683",
	                "LowerDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-154565",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-154565/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-154565",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-154565",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-154565",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "179e0304474060f405a3c52d398d589dd009fd7a533a53bc11bbcde9ddcc8032",
	            "SandboxKey": "/var/run/docker/netns/179e03044740",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-154565": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:b9:94:ac:ea:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3acff3dac19998e01d626c0b1e4f259c12319017d7e423e1cda5eea55f18a36",
	                    "EndpointID": "b1495ab6fb0a93308e73d603f9a4c31895427f281e940bc3577649d2c16229c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-154565",
	                        "dfc2c419fe48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
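For reference, the "NetworkSettings.Ports" block in the inspect output above is the same data the restart log below reads with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to find the SSH endpoint (33093 in this run). A small sketch, assuming the JSON above has been saved to inspect.json; the struct and file names are illustrative, not minikube's own types:

    // Hedged sketch: decode the saved `docker inspect` output and print the host
    // port published for 22/tcp. docker inspect returns a JSON array, so we
    // unmarshal into a slice and read the first element.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	raw, err := os.ReadFile("inspect.json") // saved output of: docker inspect default-k8s-diff-port-154565
    	if err != nil {
    		panic(err)
    	}
    	var containers []inspect
    	if err := json.Unmarshal(raw, &containers); err != nil {
    		panic(err)
    	}
    	ssh := containers[0].NetworkSettings.Ports["22/tcp"]
    	if len(ssh) == 0 {
    		fmt.Println("no host port published for 22/tcp")
    		return
    	}
    	fmt.Printf("ssh endpoint: %s:%s\n", ssh[0].HostIp, ssh[0].HostPort)
    }

With the inspect output from this run it prints 127.0.0.1:33093, matching the port the provisioning log connects to below.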
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565: exit status 2 (386.753336ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
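The helper tolerates the non-zero exit here because stdout still reports "Running". A sketch of that check, assuming the binary path and profile name from this run; treating exit status 2 as soft is an observation from this log, not a documented exit-code contract:

    // Hedged sketch of the post-mortem status step: run `minikube status` with a
    // Go template that prints only the host state, and continue as long as the
    // host reports Running even if the command exits non-zero.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-arm64", "status",
    		"--format={{.Host}}", "-p", "default-k8s-diff-port-154565")
    	out, err := cmd.Output() // stdout is still returned on a non-zero exit
    	host := strings.TrimSpace(string(out))
    	if err != nil && host != "Running" {
    		fmt.Printf("host not running: %q (%v)\n", host, err)
    		return
    	}
    	if err != nil {
    		fmt.Printf("status error: %v (may be ok, host reports %q)\n", err, host)
    	}
    	fmt.Println("host state:", host)
    }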
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-154565 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-154565 logs -n 25: (1.447223646s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:37 UTC │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-194729 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-194729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ newest-cni-194729 image list --format=json                                                                                                                                                                                                    │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ pause   │ -p newest-cni-194729 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	│ delete  │ -p newest-cni-194729                                                                                                                                                                                                                          │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ delete  │ -p newest-cni-194729                                                                                                                                                                                                                          │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ start   │ -p auto-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-937200                  │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-154565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-154565 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-154565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:39 UTC │
	│ ssh     │ -p auto-937200 pgrep -a kubelet                                                                                                                                                                                                               │ auto-937200                  │ jenkins │ v1.37.0 │ 29 Oct 25 09:39 UTC │ 29 Oct 25 09:39 UTC │
	│ image   │ default-k8s-diff-port-154565 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:39 UTC │ 29 Oct 25 09:39 UTC │
	│ pause   │ -p default-k8s-diff-port-154565 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:38:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:38:33.099808  215661 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:38:33.100410  215661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:33.100467  215661 out.go:374] Setting ErrFile to fd 2...
	I1029 09:38:33.100486  215661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:33.100778  215661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:38:33.101194  215661 out.go:368] Setting JSON to false
	I1029 09:38:33.102112  215661 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4865,"bootTime":1761725848,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:38:33.102209  215661 start.go:143] virtualization:  
	I1029 09:38:33.107184  215661 out.go:179] * [default-k8s-diff-port-154565] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:38:33.110319  215661 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:38:33.110386  215661 notify.go:221] Checking for updates...
	I1029 09:38:33.116156  215661 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:38:33.119006  215661 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:33.121985  215661 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:38:33.124803  215661 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:38:33.127744  215661 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:38:33.131015  215661 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:33.131636  215661 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:38:33.172463  215661 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:38:33.172606  215661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:33.290075  215661 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:38:33.277358778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:33.290215  215661 docker.go:319] overlay module found
	I1029 09:38:33.293210  215661 out.go:179] * Using the docker driver based on existing profile
	I1029 09:38:33.296028  215661 start.go:309] selected driver: docker
	I1029 09:38:33.296043  215661 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:38:33.296144  215661 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:38:33.296878  215661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:33.396111  215661 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:38:33.381526548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:33.396538  215661 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:38:33.396575  215661 cni.go:84] Creating CNI manager for ""
	I1029 09:38:33.396626  215661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:33.396663  215661 start.go:353] cluster config:
	{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:38:33.399680  215661 out.go:179] * Starting "default-k8s-diff-port-154565" primary control-plane node in "default-k8s-diff-port-154565" cluster
	I1029 09:38:33.402480  215661 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:38:33.405370  215661 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:38:33.408122  215661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:38:33.408188  215661 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:38:33.408202  215661 cache.go:59] Caching tarball of preloaded images
	I1029 09:38:33.408288  215661 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:38:33.408303  215661 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:38:33.408372  215661 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:38:33.408684  215661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:38:33.441170  215661 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:38:33.441189  215661 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:38:33.441203  215661 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:38:33.441224  215661 start.go:360] acquireMachinesLock for default-k8s-diff-port-154565: {Name:mk949f3a944b6d0d5624c677fdcfbf59ea2f05b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:38:33.441294  215661 start.go:364] duration metric: took 45.334µs to acquireMachinesLock for "default-k8s-diff-port-154565"
	I1029 09:38:33.441313  215661 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:38:33.441318  215661 fix.go:54] fixHost starting: 
	I1029 09:38:33.441579  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:33.462053  215661 fix.go:112] recreateIfNeeded on default-k8s-diff-port-154565: state=Stopped err=<nil>
	W1029 09:38:33.462083  215661 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:38:30.686547  213005 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:38:31.315023  213005 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:38:31.315657  213005 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:38:32.230104  213005 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:38:32.584571  213005 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:38:32.841451  213005 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:38:34.026135  213005 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:38:35.344476  213005 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:38:35.344577  213005 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:38:35.353421  213005 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:38:35.357171  213005 out.go:252]   - Booting up control plane ...
	I1029 09:38:35.357294  213005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:38:35.363390  213005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:38:35.363519  213005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:38:35.381411  213005 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:38:35.381529  213005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:38:35.388709  213005 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:38:35.388999  213005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:38:35.389186  213005 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:38:33.465500  215661 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-154565" ...
	I1029 09:38:33.465581  215661 cli_runner.go:164] Run: docker start default-k8s-diff-port-154565
	I1029 09:38:33.806970  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:33.830149  215661 kic.go:430] container "default-k8s-diff-port-154565" state is running.
	I1029 09:38:33.830888  215661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:38:33.854870  215661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:38:33.855103  215661 machine.go:94] provisionDockerMachine start ...
	I1029 09:38:33.855158  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:33.886589  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:33.886913  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:33.886922  215661 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:38:33.887632  215661 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42996->127.0.0.1:33093: read: connection reset by peer
	I1029 09:38:37.044662  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:38:37.044699  215661 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-154565"
	I1029 09:38:37.044806  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:37.067918  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:37.068227  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:37.068245  215661 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-154565 && echo "default-k8s-diff-port-154565" | sudo tee /etc/hostname
	I1029 09:38:37.235076  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:38:37.235205  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:37.258378  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:37.258691  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:37.258711  215661 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-154565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-154565/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-154565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:38:37.427782  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:38:37.427862  215661 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:38:37.427921  215661 ubuntu.go:190] setting up certificates
	I1029 09:38:37.427966  215661 provision.go:84] configureAuth start
	I1029 09:38:37.428051  215661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:38:37.457347  215661 provision.go:143] copyHostCerts
	I1029 09:38:37.457459  215661 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:38:37.457475  215661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:38:37.457552  215661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:38:37.457657  215661 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:38:37.457662  215661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:38:37.457687  215661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:38:37.457745  215661 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:38:37.457749  215661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:38:37.457773  215661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:38:37.457825  215661 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-154565 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-154565 localhost minikube]
	I1029 09:38:35.564761  213005 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:38:35.564894  213005 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:38:38.068083  213005 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.502064165s
	I1029 09:38:38.070090  213005 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:38:38.070449  213005 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:38:38.070817  213005 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:38:38.072085  213005 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:38:38.854299  215661 provision.go:177] copyRemoteCerts
	I1029 09:38:38.854396  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:38:38.854465  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:38.872181  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:38.998013  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:38:39.037116  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1029 09:38:39.082200  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:38:39.122523  215661 provision.go:87] duration metric: took 1.694518382s to configureAuth
	I1029 09:38:39.122611  215661 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:38:39.122862  215661 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:39.123040  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:39.167553  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:39.167912  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:39.167926  215661 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:38:39.673548  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:38:39.673572  215661 machine.go:97] duration metric: took 5.81845959s to provisionDockerMachine
	I1029 09:38:39.673582  215661 start.go:293] postStartSetup for "default-k8s-diff-port-154565" (driver="docker")
	I1029 09:38:39.673593  215661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:38:39.673722  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:38:39.673782  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:39.705688  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:39.835879  215661 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:38:39.845399  215661 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:38:39.845430  215661 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:38:39.845449  215661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:38:39.845515  215661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:38:39.845608  215661 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:38:39.845719  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:38:39.862855  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:38:39.901228  215661 start.go:296] duration metric: took 227.62951ms for postStartSetup
	I1029 09:38:39.901402  215661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:38:39.901503  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:39.934934  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:40.050493  215661 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:38:40.056090  215661 fix.go:56] duration metric: took 6.614763856s for fixHost
	I1029 09:38:40.056111  215661 start.go:83] releasing machines lock for "default-k8s-diff-port-154565", held for 6.61480919s
	I1029 09:38:40.056180  215661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:38:40.094135  215661 ssh_runner.go:195] Run: cat /version.json
	I1029 09:38:40.094186  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:40.094209  215661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:38:40.094278  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:40.133876  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:40.135082  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:40.347499  215661 ssh_runner.go:195] Run: systemctl --version
	I1029 09:38:40.357034  215661 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:38:40.449556  215661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:38:40.460864  215661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:38:40.460966  215661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:38:40.481089  215661 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:38:40.481113  215661 start.go:496] detecting cgroup driver to use...
	I1029 09:38:40.481175  215661 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:38:40.481263  215661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:38:40.505647  215661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:38:40.526178  215661 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:38:40.526271  215661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:38:40.553513  215661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:38:40.572757  215661 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:38:40.762667  215661 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:38:40.969449  215661 docker.go:234] disabling docker service ...
	I1029 09:38:40.969545  215661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:38:41.004697  215661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:38:41.024892  215661 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:38:41.195454  215661 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:38:41.389826  215661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:38:41.416036  215661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:38:41.444163  215661 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:38:41.444345  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.460122  215661 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:38:41.460248  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.471086  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.482832  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.494056  215661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:38:41.510349  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.533564  215661 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.542552  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.552238  215661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:38:41.560934  215661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:38:41.569276  215661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:41.793469  215661 ssh_runner.go:195] Run: sudo systemctl restart crio
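Note: the sed commands above rewrite the CRI-O drop-in config before the daemon is restarted. A minimal shell sketch for inspecting the result on the node (key names are taken from the commands above; the surrounding TOML sections depend on the base image's stock 02-crio.conf):

  # inspect the keys touched by the sed edits above
  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected values, per the commands above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])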
	I1029 09:38:42.017985  215661 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:38:42.018124  215661 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:38:42.025040  215661 start.go:564] Will wait 60s for crictl version
	I1029 09:38:42.025163  215661 ssh_runner.go:195] Run: which crictl
	I1029 09:38:42.030024  215661 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:38:42.067560  215661 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:38:42.067717  215661 ssh_runner.go:195] Run: crio --version
	I1029 09:38:42.138136  215661 ssh_runner.go:195] Run: crio --version
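To reproduce the runtime probe by hand over the node's SSH session, the same checks look roughly like this (a sketch; the endpoint matches the crictl.yaml written a few lines above):

  # query CRI-O the same way the test driver does
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  crio --version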
	I1029 09:38:42.195579  215661 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:38:42.198633  215661 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-154565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:38:42.226154  215661 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:38:42.231162  215661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:38:42.246674  215661 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:38:42.246794  215661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:38:42.246856  215661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:38:42.327359  215661 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:38:42.327385  215661 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:38:42.327471  215661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:38:42.378716  215661 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:38:42.378735  215661 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:38:42.378743  215661 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1029 09:38:42.378849  215661 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-154565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
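The [Unit]/[Service] fragment above is what minikube later copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down). A quick sketch for confirming what systemd actually loaded on the node:

  # show the effective kubelet unit, including the 10-kubeadm.conf drop-in
  sudo systemctl cat kubelet
  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf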
	I1029 09:38:42.378926  215661 ssh_runner.go:195] Run: crio config
	I1029 09:38:42.473129  215661 cni.go:84] Creating CNI manager for ""
	I1029 09:38:42.473151  215661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:42.473200  215661 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:38:42.473233  215661 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-154565 NodeName:default-k8s-diff-port-154565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:38:42.473409  215661 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-154565"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:38:42.473493  215661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:38:42.483443  215661 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:38:42.483548  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:38:42.497665  215661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1029 09:38:42.520827  215661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:38:42.537672  215661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
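The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new. A sketch for sanity-checking it before kubeadm consumes it (assuming a kubeadm binary sits alongside kubelet and kubectl under /var/lib/minikube/binaries/v1.34.1/):

  # inspect and validate the generated config
  sudo cat /var/tmp/minikube/kubeadm.yaml.new
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new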
	I1029 09:38:42.565724  215661 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:38:42.569963  215661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
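Both /etc/hosts rewrites above use the same pattern: drop any stale entry for the name, append the fresh IP/name pair, and copy the temp file back into place with sudo. A verification sketch:

  # confirm the pinned entries after the rewrite
  grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
  # expected:
  #   192.168.76.1  host.minikube.internal
  #   192.168.76.2  control-plane.minikube.internal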
	I1029 09:38:42.581526  215661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:42.784820  215661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:38:42.813917  215661 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565 for IP: 192.168.76.2
	I1029 09:38:42.813989  215661 certs.go:195] generating shared ca certs ...
	I1029 09:38:42.814022  215661 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:42.814234  215661 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:38:42.814321  215661 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:38:42.814345  215661 certs.go:257] generating profile certs ...
	I1029 09:38:42.814482  215661 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.key
	I1029 09:38:42.814591  215661 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/apiserver.key.f827afaa
	I1029 09:38:42.814673  215661 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/proxy-client.key
	I1029 09:38:42.814848  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:38:42.814917  215661 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:38:42.814943  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:38:42.814999  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:38:42.815066  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:38:42.815111  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:38:42.815212  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:38:42.816016  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:38:42.875334  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:38:42.921448  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:38:42.961408  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:38:42.992042  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 09:38:43.053821  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:38:43.099321  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:38:43.144803  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:38:43.211557  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:38:43.250637  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:38:43.300589  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:38:43.328729  215661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:38:43.350562  215661 ssh_runner.go:195] Run: openssl version
	I1029 09:38:43.357868  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:38:43.370979  215661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:38:43.375837  215661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:38:43.375929  215661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:38:43.425892  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:38:43.434151  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:38:43.443840  215661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:38:43.452263  215661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:38:43.452393  215661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:38:43.495428  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:38:43.503755  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:38:43.512088  215661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:38:43.516415  215661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:38:43.516507  215661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:38:43.566754  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:38:43.575012  215661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:38:43.579531  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:38:43.623633  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:38:43.666777  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:38:43.756694  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:38:43.837423  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:38:43.909945  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
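The openssl calls above pass -checkend 86400, so they exit non-zero if a certificate would expire within the next 24 hours, which is what forces regeneration. A standalone sketch of the same check:

  # report expiry and flag any cert lapsing within 24h (86400s)
  for c in apiserver-kubelet-client etcd/server etcd/peer front-proxy-client; do
    sudo openssl x509 -noout -enddate -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
      && echo "OK: ${c}" || echo "EXPIRING SOON: ${c}"
  done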
	I1029 09:38:43.997177  215661 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:38:43.997265  215661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:38:43.997394  215661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:38:44.117996  215661 cri.go:89] found id: "fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944"
	I1029 09:38:44.118020  215661 cri.go:89] found id: "2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5"
	I1029 09:38:44.118025  215661 cri.go:89] found id: ""
	I1029 09:38:44.118110  215661 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:38:44.183141  215661 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:44Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:38:44.183263  215661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:38:44.211267  215661 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:38:44.211287  215661 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:38:44.211365  215661 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:38:44.242255  215661 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:38:44.242795  215661 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-154565" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:44.242950  215661 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-154565" cluster setting kubeconfig missing "default-k8s-diff-port-154565" context setting]
	I1029 09:38:44.243303  215661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:44.245102  215661 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:38:44.260892  215661 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1029 09:38:44.260971  215661 kubeadm.go:602] duration metric: took 49.678367ms to restartPrimaryControlPlane
	I1029 09:38:44.260994  215661 kubeadm.go:403] duration metric: took 263.825425ms to StartCluster
	I1029 09:38:44.261033  215661 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:44.261126  215661 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:44.261820  215661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:44.262078  215661 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:38:44.262455  215661 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:38:44.262525  215661 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-154565"
	I1029 09:38:44.262538  215661 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-154565"
	W1029 09:38:44.262544  215661 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:38:44.262564  215661 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:38:44.263204  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.263544  215661 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:44.263666  215661 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-154565"
	I1029 09:38:44.263735  215661 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-154565"
	I1029 09:38:44.264053  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.264229  215661 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-154565"
	I1029 09:38:44.264273  215661 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-154565"
	W1029 09:38:44.264293  215661 addons.go:248] addon dashboard should already be in state true
	I1029 09:38:44.264343  215661 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:38:44.265229  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.267125  215661 out.go:179] * Verifying Kubernetes components...
	I1029 09:38:44.276417  215661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:44.301896  215661 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:38:44.305103  215661 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:44.305126  215661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:38:44.305191  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:44.336532  215661 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:38:44.340975  215661 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-154565"
	W1029 09:38:44.340997  215661 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:38:44.341022  215661 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:38:44.341524  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.349377  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:44.350358  215661 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:38:43.269606  213005 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.196213385s
	I1029 09:38:44.353350  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:38:44.353373  215661 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:38:44.353440  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:44.380507  215661 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:44.380529  215661 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:38:44.380592  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:44.421766  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:44.432538  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:44.781343  215661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:38:44.829855  215661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:44.832508  215661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:44.885934  215661 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-154565" to be "Ready" ...
	I1029 09:38:44.892362  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:38:44.892445  215661 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:38:45.046304  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:38:45.046388  215661 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:38:45.171590  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:38:45.171676  215661 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:38:45.350916  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:38:45.350991  215661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:38:45.397653  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:38:45.397731  215661 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:38:45.446337  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:38:45.446399  215661 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:38:45.477763  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:38:45.477834  215661 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:38:45.503761  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:38:45.503832  215661 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:38:45.545896  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:38:45.545972  215661 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:38:45.606196  215661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
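All ten dashboard manifests are applied in one kubectl invocation against the cluster's own kubeconfig. A sketch for checking the result from the host once the apply finishes (the dashboard addon's objects normally live in the kubernetes-dashboard namespace):

  # confirm the dashboard deployment and service came up
  kubectl --context default-k8s-diff-port-154565 -n kubernetes-dashboard get deploy,svc,pods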
	I1029 09:38:47.347710  213005 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.273932906s
	I1029 09:38:48.073715  213005 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002413045s
	I1029 09:38:48.094839  213005 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:38:48.111386  213005 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:38:48.127105  213005 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:38:48.127538  213005 kubeadm.go:319] [mark-control-plane] Marking the node auto-937200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:38:48.140654  213005 kubeadm.go:319] [bootstrap-token] Using token: zwnbgz.6ylmjp2fugmqq52x
	I1029 09:38:48.143605  213005 out.go:252]   - Configuring RBAC rules ...
	I1029 09:38:48.143734  213005 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:38:48.148909  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:38:48.163793  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:38:48.170567  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:38:48.175396  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:38:48.182016  213005 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:38:48.480673  213005 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:38:48.977507  213005 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:38:49.495224  213005 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:38:49.497154  213005 kubeadm.go:319] 
	I1029 09:38:49.497241  213005 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:38:49.497253  213005 kubeadm.go:319] 
	I1029 09:38:49.497336  213005 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:38:49.497345  213005 kubeadm.go:319] 
	I1029 09:38:49.497372  213005 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:38:49.497439  213005 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:38:49.497500  213005 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:38:49.497509  213005 kubeadm.go:319] 
	I1029 09:38:49.497565  213005 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:38:49.497574  213005 kubeadm.go:319] 
	I1029 09:38:49.497624  213005 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:38:49.497648  213005 kubeadm.go:319] 
	I1029 09:38:49.497707  213005 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:38:49.497789  213005 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:38:49.497864  213005 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:38:49.497873  213005 kubeadm.go:319] 
	I1029 09:38:49.497961  213005 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:38:49.498045  213005 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:38:49.498054  213005 kubeadm.go:319] 
	I1029 09:38:49.498143  213005 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zwnbgz.6ylmjp2fugmqq52x \
	I1029 09:38:49.498255  213005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:38:49.498279  213005 kubeadm.go:319] 	--control-plane 
	I1029 09:38:49.498289  213005 kubeadm.go:319] 
	I1029 09:38:49.498378  213005 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:38:49.498387  213005 kubeadm.go:319] 
	I1029 09:38:49.498473  213005 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zwnbgz.6ylmjp2fugmqq52x \
	I1029 09:38:49.498766  213005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:38:49.503801  213005 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 09:38:49.504042  213005 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:38:49.504155  213005 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:38:49.504175  213005 cni.go:84] Creating CNI manager for ""
	I1029 09:38:49.504186  213005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:49.509813  213005 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:38:49.512708  213005 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:38:49.517797  213005 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:38:49.517815  213005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:38:49.542495  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
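The kindnet CNI manifest copied to /var/tmp/minikube/cni.yaml is applied with the in-cluster kubeconfig. A sketch for confirming the DaemonSet afterwards (assuming the kindnet pods carry the usual app=kindnet label):

  # check that the kindnet CNI DaemonSet rolled out
  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get ds,pods -l app=kindnet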
	I1029 09:38:50.233032  213005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:38:50.233156  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:50.233219  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-937200 minikube.k8s.io/updated_at=2025_10_29T09_38_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=auto-937200 minikube.k8s.io/primary=true
	I1029 09:38:51.484381  215661 node_ready.go:49] node "default-k8s-diff-port-154565" is "Ready"
	I1029 09:38:51.484408  215661 node_ready.go:38] duration metric: took 6.598404743s for node "default-k8s-diff-port-154565" to be "Ready" ...
	I1029 09:38:51.484423  215661 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:38:51.484483  215661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:38:51.700601  215661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.870664954s)
	I1029 09:38:53.219066  215661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.386480547s)
	I1029 09:38:53.317215  215661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.710937614s)
	I1029 09:38:53.317451  215661 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.832945572s)
	I1029 09:38:53.317486  215661 api_server.go:72] duration metric: took 9.055349624s to wait for apiserver process to appear ...
	I1029 09:38:53.317506  215661 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:38:53.317537  215661 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
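The healthz wait polls the API server on the profile's non-default port (8444). Reproducing the probe by hand from the host (a sketch; -k skips TLS verification, or pass --cacert with the profile's ca.crt instead):

  # probe the apiserver health endpoints directly
  curl -k https://192.168.76.2:8444/healthz
  curl -k "https://192.168.76.2:8444/readyz?verbose"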
	I1029 09:38:53.320278  215661 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-154565 addons enable metrics-server
	
	I1029 09:38:53.323164  215661 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1029 09:38:50.753853  213005 ops.go:34] apiserver oom_adj: -16
	I1029 09:38:50.753973  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:51.254071  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:51.754695  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:52.255041  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:52.754350  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:53.254226  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:53.754422  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:54.254932  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:54.754882  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:54.982904  213005 kubeadm.go:1114] duration metric: took 4.749791789s to wait for elevateKubeSystemPrivileges
	I1029 09:38:54.982930  213005 kubeadm.go:403] duration metric: took 28.357253238s to StartCluster
	I1029 09:38:54.982946  213005 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:54.983007  213005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:54.983941  213005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:54.984142  213005 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:38:54.984299  213005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:38:54.984567  213005 config.go:182] Loaded profile config "auto-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:54.984599  213005 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:38:54.984657  213005 addons.go:70] Setting storage-provisioner=true in profile "auto-937200"
	I1029 09:38:54.984671  213005 addons.go:239] Setting addon storage-provisioner=true in "auto-937200"
	I1029 09:38:54.984692  213005 host.go:66] Checking if "auto-937200" exists ...
	I1029 09:38:54.985136  213005 addons.go:70] Setting default-storageclass=true in profile "auto-937200"
	I1029 09:38:54.985154  213005 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-937200"
	I1029 09:38:54.985564  213005 cli_runner.go:164] Run: docker container inspect auto-937200 --format={{.State.Status}}
	I1029 09:38:54.985942  213005 cli_runner.go:164] Run: docker container inspect auto-937200 --format={{.State.Status}}
	I1029 09:38:54.987646  213005 out.go:179] * Verifying Kubernetes components...
	I1029 09:38:54.997569  213005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:55.024155  213005 addons.go:239] Setting addon default-storageclass=true in "auto-937200"
	I1029 09:38:55.024205  213005 host.go:66] Checking if "auto-937200" exists ...
	I1029 09:38:55.024667  213005 cli_runner.go:164] Run: docker container inspect auto-937200 --format={{.State.Status}}
	I1029 09:38:55.038974  213005 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:38:55.041965  213005 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:55.041992  213005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:38:55.042062  213005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-937200
	I1029 09:38:55.072589  213005 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:55.072615  213005 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:38:55.072695  213005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-937200
	I1029 09:38:55.079218  213005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/auto-937200/id_rsa Username:docker}
	I1029 09:38:55.098506  213005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/auto-937200/id_rsa Username:docker}
	I1029 09:38:55.448946  213005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:38:55.449192  213005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:38:55.521203  213005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:55.547208  213005 node_ready.go:35] waiting up to 15m0s for node "auto-937200" to be "Ready" ...
	I1029 09:38:55.613002  213005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:56.168771  213005 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1029 09:38:56.514189  213005 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1029 09:38:53.326024  215661 addons.go:515] duration metric: took 9.063557568s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1029 09:38:53.331234  215661 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1029 09:38:53.333714  215661 api_server.go:141] control plane version: v1.34.1
	I1029 09:38:53.333777  215661 api_server.go:131] duration metric: took 16.250946ms to wait for apiserver health ...
	I1029 09:38:53.333805  215661 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:38:53.340204  215661 system_pods.go:59] 8 kube-system pods found
	I1029 09:38:53.340280  215661 system_pods.go:61] "coredns-66bc5c9577-hbn59" [571dd534-5c05-4ea1-b2fa-292f307b4037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:38:53.340344  215661 system_pods.go:61] "etcd-default-k8s-diff-port-154565" [53c9dae2-fca7-4051-b461-90cb4406bce2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:38:53.340373  215661 system_pods.go:61] "kindnet-btswn" [a7737b1f-9d42-4a7d-8bd7-84911d52c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:38:53.340395  215661 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-154565" [2272867f-fac7-443c-9471-ca7f7627c890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:38:53.340422  215661 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-154565" [2430e944-a50b-4c78-8361-998a66b1a633] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:38:53.340455  215661 system_pods.go:61] "kube-proxy-vxlb9" [46793add-1a42-48cd-835c-69d4f9a1bf7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:38:53.340487  215661 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-154565" [66c1d519-8710-43bd-b90d-5bc17357ddd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:38:53.340517  215661 system_pods.go:61] "storage-provisioner" [3716ce63-bbfd-489a-a382-9c6d5dc40925] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:38:53.340544  215661 system_pods.go:74] duration metric: took 6.718239ms to wait for pod list to return data ...
	I1029 09:38:53.340592  215661 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:38:53.343572  215661 default_sa.go:45] found service account: "default"
	I1029 09:38:53.343639  215661 default_sa.go:55] duration metric: took 3.027006ms for default service account to be created ...
	I1029 09:38:53.343663  215661 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:38:53.347434  215661 system_pods.go:86] 8 kube-system pods found
	I1029 09:38:53.347513  215661 system_pods.go:89] "coredns-66bc5c9577-hbn59" [571dd534-5c05-4ea1-b2fa-292f307b4037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:38:53.347537  215661 system_pods.go:89] "etcd-default-k8s-diff-port-154565" [53c9dae2-fca7-4051-b461-90cb4406bce2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:38:53.347577  215661 system_pods.go:89] "kindnet-btswn" [a7737b1f-9d42-4a7d-8bd7-84911d52c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:38:53.347606  215661 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-154565" [2272867f-fac7-443c-9471-ca7f7627c890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:38:53.347634  215661 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-154565" [2430e944-a50b-4c78-8361-998a66b1a633] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:38:53.347663  215661 system_pods.go:89] "kube-proxy-vxlb9" [46793add-1a42-48cd-835c-69d4f9a1bf7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:38:53.347700  215661 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-154565" [66c1d519-8710-43bd-b90d-5bc17357ddd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:38:53.347721  215661 system_pods.go:89] "storage-provisioner" [3716ce63-bbfd-489a-a382-9c6d5dc40925] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:38:53.347744  215661 system_pods.go:126] duration metric: took 4.062691ms to wait for k8s-apps to be running ...
	I1029 09:38:53.347778  215661 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:38:53.347851  215661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:38:53.368274  215661 system_svc.go:56] duration metric: took 20.488687ms WaitForService to wait for kubelet
	I1029 09:38:53.368359  215661 kubeadm.go:587] duration metric: took 9.106220503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:38:53.368395  215661 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:38:53.377200  215661 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:38:53.377281  215661 node_conditions.go:123] node cpu capacity is 2
	I1029 09:38:53.377308  215661 node_conditions.go:105] duration metric: took 8.891412ms to run NodePressure ...
	I1029 09:38:53.377357  215661 start.go:242] waiting for startup goroutines ...
	I1029 09:38:53.377385  215661 start.go:247] waiting for cluster config update ...
	I1029 09:38:53.377412  215661 start.go:256] writing updated cluster config ...
	I1029 09:38:53.377745  215661 ssh_runner.go:195] Run: rm -f paused
	I1029 09:38:53.382286  215661 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:38:53.387677  215661 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hbn59" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:38:55.393783  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:38:57.394997  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	I1029 09:38:56.517455  213005 addons.go:515] duration metric: took 1.532819718s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1029 09:38:56.672821  213005 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-937200" context rescaled to 1 replicas
	W1029 09:38:57.551393  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:38:59.551639  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:38:59.892866  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:01.894823  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:02.051377  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:04.549856  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:04.400171  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:06.893682  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:06.550621  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:09.050839  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:09.394081  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:11.398469  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:11.550856  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:14.050822  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:13.892978  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:16.395595  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:16.550000  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:18.550825  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:18.396802  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:20.397735  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:22.895975  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:21.050787  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:23.050941  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:25.051011  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:25.396471  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:27.397886  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	I1029 09:39:28.893737  215661 pod_ready.go:94] pod "coredns-66bc5c9577-hbn59" is "Ready"
	I1029 09:39:28.893773  215661 pod_ready.go:86] duration metric: took 35.506030291s for pod "coredns-66bc5c9577-hbn59" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.896935  215661 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.901644  215661 pod_ready.go:94] pod "etcd-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:28.901671  215661 pod_ready.go:86] duration metric: took 4.709786ms for pod "etcd-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.903798  215661 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.908247  215661 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:28.908354  215661 pod_ready.go:86] duration metric: took 4.530265ms for pod "kube-apiserver-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.910831  215661 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.092255  215661 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:29.092287  215661 pod_ready.go:86] duration metric: took 181.432296ms for pod "kube-controller-manager-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.292057  215661 pod_ready.go:83] waiting for pod "kube-proxy-vxlb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.692669  215661 pod_ready.go:94] pod "kube-proxy-vxlb9" is "Ready"
	I1029 09:39:29.692697  215661 pod_ready.go:86] duration metric: took 400.610431ms for pod "kube-proxy-vxlb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.891729  215661 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:30.292263  215661 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:30.292294  215661 pod_ready.go:86] duration metric: took 400.536699ms for pod "kube-scheduler-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:30.292334  215661 pod_ready.go:40] duration metric: took 36.90995243s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:39:30.349415  215661 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	W1029 09:39:27.550020  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:29.550269  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	I1029 09:39:30.437358  215661 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-154565" cluster and "default" namespace by default
	W1029 09:39:31.550884  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:34.050911  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	I1029 09:39:36.051725  213005 node_ready.go:49] node "auto-937200" is "Ready"
	I1029 09:39:36.051751  213005 node_ready.go:38] duration metric: took 40.504497163s for node "auto-937200" to be "Ready" ...
	I1029 09:39:36.051766  213005 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:39:36.051827  213005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:39:36.065292  213005 api_server.go:72] duration metric: took 41.081123074s to wait for apiserver process to appear ...
	I1029 09:39:36.065312  213005 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:39:36.065333  213005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:39:36.077806  213005 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:39:36.079054  213005 api_server.go:141] control plane version: v1.34.1
	I1029 09:39:36.079076  213005 api_server.go:131] duration metric: took 13.756138ms to wait for apiserver health ...
	I1029 09:39:36.079085  213005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:39:36.089027  213005 system_pods.go:59] 8 kube-system pods found
	I1029 09:39:36.089056  213005 system_pods.go:61] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending
	I1029 09:39:36.089062  213005 system_pods.go:61] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.089067  213005 system_pods.go:61] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.089071  213005 system_pods.go:61] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.089076  213005 system_pods.go:61] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.089080  213005 system_pods.go:61] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.089084  213005 system_pods.go:61] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.089089  213005 system_pods.go:61] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending
	I1029 09:39:36.089094  213005 system_pods.go:74] duration metric: took 10.004153ms to wait for pod list to return data ...
	I1029 09:39:36.089102  213005 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:39:36.096826  213005 default_sa.go:45] found service account: "default"
	I1029 09:39:36.096849  213005 default_sa.go:55] duration metric: took 7.741182ms for default service account to be created ...
	I1029 09:39:36.096858  213005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:39:36.111888  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.111924  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending
	I1029 09:39:36.111931  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.111936  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.111941  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.111945  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.111949  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.111953  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.111964  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.112001  213005 retry.go:31] will retry after 230.447185ms: missing components: kube-dns
	I1029 09:39:36.346557  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.346597  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:39:36.346605  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.346611  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.346616  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.346620  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.346626  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.346630  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.346654  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.346677  213005 retry.go:31] will retry after 296.152942ms: missing components: kube-dns
	I1029 09:39:36.647666  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.647703  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:39:36.647710  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.647717  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.647722  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.647727  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.647733  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.647737  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.647743  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.647767  213005 retry.go:31] will retry after 303.946243ms: missing components: kube-dns
	I1029 09:39:36.955091  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.955140  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:39:36.955147  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.955153  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.955157  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.955162  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.955166  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.955170  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.955176  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.955189  213005 retry.go:31] will retry after 517.141809ms: missing components: kube-dns
	I1029 09:39:37.483240  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:37.483272  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Running
	I1029 09:39:37.483280  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:37.483286  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:37.483291  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:37.483295  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:37.483299  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:37.483304  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:37.483308  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Running
	I1029 09:39:37.483315  213005 system_pods.go:126] duration metric: took 1.386452299s to wait for k8s-apps to be running ...
	I1029 09:39:37.483328  213005 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:39:37.483394  213005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:39:37.502234  213005 system_svc.go:56] duration metric: took 18.894852ms WaitForService to wait for kubelet
	I1029 09:39:37.502260  213005 kubeadm.go:587] duration metric: took 42.518096204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:39:37.502280  213005 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:39:37.505971  213005 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:39:37.506008  213005 node_conditions.go:123] node cpu capacity is 2
	I1029 09:39:37.506023  213005 node_conditions.go:105] duration metric: took 3.683882ms to run NodePressure ...
	I1029 09:39:37.506036  213005 start.go:242] waiting for startup goroutines ...
	I1029 09:39:37.506044  213005 start.go:247] waiting for cluster config update ...
	I1029 09:39:37.506055  213005 start.go:256] writing updated cluster config ...
	I1029 09:39:37.506354  213005 ssh_runner.go:195] Run: rm -f paused
	I1029 09:39:37.510187  213005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:39:37.581359  213005 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tgrw8" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.586202  213005 pod_ready.go:94] pod "coredns-66bc5c9577-tgrw8" is "Ready"
	I1029 09:39:37.586230  213005 pod_ready.go:86] duration metric: took 4.84203ms for pod "coredns-66bc5c9577-tgrw8" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.588913  213005 pod_ready.go:83] waiting for pod "etcd-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.593382  213005 pod_ready.go:94] pod "etcd-auto-937200" is "Ready"
	I1029 09:39:37.593452  213005 pod_ready.go:86] duration metric: took 4.509367ms for pod "etcd-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.595574  213005 pod_ready.go:83] waiting for pod "kube-apiserver-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.600067  213005 pod_ready.go:94] pod "kube-apiserver-auto-937200" is "Ready"
	I1029 09:39:37.600093  213005 pod_ready.go:86] duration metric: took 4.49614ms for pod "kube-apiserver-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.602471  213005 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.914842  213005 pod_ready.go:94] pod "kube-controller-manager-auto-937200" is "Ready"
	I1029 09:39:37.914878  213005 pod_ready.go:86] duration metric: took 312.383814ms for pod "kube-controller-manager-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:38.116065  213005 pod_ready.go:83] waiting for pod "kube-proxy-dmr48" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:38.514515  213005 pod_ready.go:94] pod "kube-proxy-dmr48" is "Ready"
	I1029 09:39:38.514552  213005 pod_ready.go:86] duration metric: took 398.457037ms for pod "kube-proxy-dmr48" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:38.715280  213005 pod_ready.go:83] waiting for pod "kube-scheduler-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:39.114824  213005 pod_ready.go:94] pod "kube-scheduler-auto-937200" is "Ready"
	I1029 09:39:39.114855  213005 pod_ready.go:86] duration metric: took 399.550471ms for pod "kube-scheduler-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:39.114868  213005 pod_ready.go:40] duration metric: took 1.604647504s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:39:39.170828  213005 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:39:39.177051  213005 out.go:179] * Done! kubectl is now configured to use "auto-937200" cluster and "default" namespace by default
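	[editor's note] The startup log above walks the same readiness sequence for both clusters: poll the apiserver /healthz endpoint, list kube-system pods, confirm the default service account, check that the kubelet service is active, then verify node conditions. As a minimal sketch only (not minikube's implementation; the URL, timeouts, and TLS handling are placeholder assumptions), the healthz wait seen at "Checking apiserver healthz at https://192.168.85.2:8443/healthz ... returned 200: ok" amounts to a loop like this:
	
	    // Illustrative sketch only: NOT minikube's code. URL, timeout values,
	    // and the relaxed TLS verification are placeholder assumptions for a
	    // local, self-signed apiserver endpoint like the one in the log above.
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )
	
	    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Printf("%s returned 200: %s\n", url, string(body))
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	    }
	
	    func main() {
	        // Placeholder address in the same form as the log above.
	        _ = waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute)
	    }
	
	The sections that follow are the unmodified `minikube logs` dump collected after the run.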
	
	
	==> CRI-O <==
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.152479308Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.156035681Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.156071053Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.15609402Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.159927187Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.159962576Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.159985247Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.165190054Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.165224614Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.165246038Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.179678408Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.179735688Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.280940433Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fe891e8a-d6d0-4a3c-8071-5d6c7fb46252 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.282750075Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f52dd674-424b-4c6a-b561-60559afe0856 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.283961187Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper" id=556613ea-00c6-4431-a33f-f3e73978920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.284079506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.297464917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.298298237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.326606484Z" level=info msg="Created container def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper" id=556613ea-00c6-4431-a33f-f3e73978920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.331005203Z" level=info msg="Starting container: def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5" id=331af54f-576f-4b6b-8bac-7a0e11862ada name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.334556169Z" level=info msg="Started container" PID=1712 containerID=def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper id=331af54f-576f-4b6b-8bac-7a0e11862ada name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1e12f72104fa4e2ddfd586361e65c909e9ff57dd60886ef37199bc6e178a1b1
	Oct 29 09:39:39 default-k8s-diff-port-154565 conmon[1710]: conmon def5f21481b3d0e59948 <ninfo>: container 1712 exited with status 1
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.714012557Z" level=info msg="Removing container: 7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa" id=fc263607-7547-43ed-b0f1-282d5e4b0a87 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.739493094Z" level=info msg="Error loading conmon cgroup of container 7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa: cgroup deleted" id=fc263607-7547-43ed-b0f1-282d5e4b0a87 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.746384964Z" level=info msg="Removed container 7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper" id=fc263607-7547-43ed-b0f1-282d5e4b0a87 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	def5f21481b3d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   b1e12f72104fa       dashboard-metrics-scraper-6ffb444bf9-sgx54             kubernetes-dashboard
	0c86ad951f717       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   b345e804a673d       storage-provisioner                                    kube-system
	47c8964204a91       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   10acd8436c798       kubernetes-dashboard-855c9754f9-zcdsw                  kubernetes-dashboard
	c46b79795aaad       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   5e0beb6b56f95       coredns-66bc5c9577-hbn59                               kube-system
	996dd46a13bd9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   b93e9c1942bd9       kindnet-btswn                                          kube-system
	40419a34f22d4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   795f801432d51       kube-proxy-vxlb9                                       kube-system
	a78c14571ca50       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   ed8e58fc80bf8       busybox                                                default
	76deef5dfbe89       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   b345e804a673d       storage-provisioner                                    kube-system
	4ecc87c3c4efe       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4b0955f33583f       kube-apiserver-default-k8s-diff-port-154565            kube-system
	fac10df47d1f3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5e443d3878e11       kube-scheduler-default-k8s-diff-port-154565            kube-system
	921026fa87ee2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   16d3b21bf28e3       kube-controller-manager-default-k8s-diff-port-154565   kube-system
	2735bfa1503d0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2affffde098df       etcd-default-k8s-diff-port-154565                      kube-system
	
	
	==> coredns [c46b79795aaad08becba49a7b200667b944eb335b0b342474d42e8439a790a5d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52688 - 1155 "HINFO IN 5430665766845367308.6762811313159703522. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021407907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-154565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-154565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=default-k8s-diff-port-154565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_37_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:37:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-154565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:39:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:38:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-154565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                78efc080-8619-433f-9174-c9ba8af774f1
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-hbn59                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-154565                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-btswn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-154565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-154565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-vxlb9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-154565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sgx54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zcdsw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m21s              kube-proxy       
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m28s              kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m28s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s              kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s              kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m28s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s              node-controller  Node default-k8s-diff-port-154565 event: Registered Node default-k8s-diff-port-154565 in Controller
	  Normal   NodeReady                101s               kubelet          Node default-k8s-diff-port-154565 status is now: NodeReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node default-k8s-diff-port-154565 event: Registered Node default-k8s-diff-port-154565 in Controller
	
	
	==> dmesg <==
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	[Oct29 09:37] overlayfs: idmapped layers are currently not supported
	[ +19.842209] overlayfs: idmapped layers are currently not supported
	[ +25.062735] overlayfs: idmapped layers are currently not supported
	[Oct29 09:38] overlayfs: idmapped layers are currently not supported
	[  +5.356953] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5] <==
	{"level":"warn","ts":"2025-10-29T09:38:48.890172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:48.994710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.032646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.082761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.201122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.236210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.356365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.368494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.418564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.476580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.512406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.566212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.608548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.645611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.722052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.779186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.832928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.906355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.929821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.954463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.988727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.046049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.065039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.097035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.244436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47768","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:39:45 up  1:22,  0 user,  load average: 3.09, 3.70, 3.04
	Linux default-k8s-diff-port-154565 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [996dd46a13bd9c4fbc716e270a5ee2bfd1b8ca9b3678e68b888aa222415a9866] <==
	I1029 09:38:52.861067       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:38:52.861639       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:38:52.861837       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:38:52.861890       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:38:52.861926       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:38:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:38:53.146736       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:38:53.146754       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:38:53.146761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:38:53.146886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:39:23.146842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:39:23.146914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:39:23.147080       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:39:23.147708       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 09:39:24.746909       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:39:24.746942       1 metrics.go:72] Registering metrics
	I1029 09:39:24.747012       1 controller.go:711] "Syncing nftables rules"
	I1029 09:39:33.148403       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:39:33.148468       1 main.go:301] handling current node
	I1029 09:39:43.148652       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:39:43.148698       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4ecc87c3c4efebb87e8579fe30d41b373305c1560267c5e5c1c7e4f651d75911] <==
	I1029 09:38:51.491102       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:38:51.491268       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:38:51.491285       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:38:51.491312       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:38:51.491319       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:38:51.539058       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:38:51.541687       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:38:51.562331       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:38:51.562357       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:38:51.574645       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:38:51.591842       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:38:51.591878       1 policy_source.go:240] refreshing policies
	I1029 09:38:51.598890       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:38:51.613104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:38:52.205889       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:38:52.232771       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:38:52.699977       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:38:52.946351       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:38:53.072252       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:38:53.111433       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:38:53.248666       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.16.56"}
	I1029 09:38:53.304004       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.83.233"}
	I1029 09:38:54.934032       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:38:55.323157       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:38:55.372214       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [921026fa87ee220227613d52ff56bc6b3408a4d844d6176f9493e6f447ed8e33] <==
	I1029 09:38:54.917146       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:38:54.917203       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:38:54.917281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:54.917310       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:38:54.917341       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:38:54.918545       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:38:54.927625       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:38:54.934182       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:38:54.937737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:38:54.938344       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:54.941697       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:38:54.950494       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:38:54.957705       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:38:54.967811       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:38:54.967959       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:38:54.967978       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:38:54.967986       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:38:54.967999       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:38:54.968007       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:38:54.970155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:38:54.970924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:38:54.970940       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:38:54.990000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:38:54.996414       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:38:55.000478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [40419a34f22d499b5e10f2817ca3190043cf4654975faa221907811657572319] <==
	I1029 09:38:53.018433       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:38:53.383110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:38:53.487748       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:38:53.487788       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:38:53.487878       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:38:53.556645       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:38:53.556709       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:38:53.561357       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:38:53.561675       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:38:53.561714       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:38:53.576180       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:38:53.576215       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:38:53.582512       1 config.go:200] "Starting service config controller"
	I1029 09:38:53.582594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:38:53.582908       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:38:53.582949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:38:53.603037       1 config.go:309] "Starting node config controller"
	I1029 09:38:53.608409       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:38:53.608517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:38:53.677278       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:38:53.683658       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:38:53.683753       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944] <==
	I1029 09:38:48.810770       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:38:51.356557       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:38:51.356660       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:38:51.356694       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:38:51.356724       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:38:51.505912       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:38:51.505946       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:38:51.535446       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:38:51.535575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:38:51.535595       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:38:51.535616       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:38:51.635673       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:38:56 default-k8s-diff-port-154565 kubelet[770]: W1029 09:38:56.068918     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/crio-10acd8436c7984e73ec091b4fb8d2c7ada9c89275fdf6d1472b521b17a94f5f9 WatchSource:0}: Error finding container 10acd8436c7984e73ec091b4fb8d2c7ada9c89275fdf6d1472b521b17a94f5f9: Status 404 returned error can't find the container with id 10acd8436c7984e73ec091b4fb8d2c7ada9c89275fdf6d1472b521b17a94f5f9
	Oct 29 09:38:58 default-k8s-diff-port-154565 kubelet[770]: I1029 09:38:58.588527     770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:39:01 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:01.598669     770 scope.go:117] "RemoveContainer" containerID="e91fb5813d5aa100fa5522a4e147779a083cac8ced044593db14f370d56dc385"
	Oct 29 09:39:02 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:02.603191     770 scope.go:117] "RemoveContainer" containerID="e91fb5813d5aa100fa5522a4e147779a083cac8ced044593db14f370d56dc385"
	Oct 29 09:39:02 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:02.609204     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:02 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:02.609428     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:03 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:03.606972     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:03 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:03.607181     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:05 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:05.949573     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:05 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:05.950186     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.279284     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.643476     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.643676     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:16.643828     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.665979     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdsw" podStartSLOduration=12.226904052 podStartE2EDuration="21.665961076s" podCreationTimestamp="2025-10-29 09:38:55 +0000 UTC" firstStartedPulling="2025-10-29 09:38:56.095036277 +0000 UTC m=+13.287465013" lastFinishedPulling="2025-10-29 09:39:05.534093301 +0000 UTC m=+22.726522037" observedRunningTime="2025-10-29 09:39:05.628157567 +0000 UTC m=+22.820586327" watchObservedRunningTime="2025-10-29 09:39:16.665961076 +0000 UTC m=+33.858389812"
	Oct 29 09:39:23 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:23.663188     770 scope.go:117] "RemoveContainer" containerID="76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471"
	Oct 29 09:39:25 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:25.949137     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:25 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:25.949884     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:39.279570     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:39.706910     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:39.707833     770 scope.go:117] "RemoveContainer" containerID="def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:39.709780     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:43 default-k8s-diff-port-154565 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:39:43 default-k8s-diff-port-154565 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:39:43 default-k8s-diff-port-154565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [47c8964204a91d0b46d5e4ff09a253ddec6adc122582f93a5497e300ab1bf5ea] <==
	2025/10/29 09:39:05 Using namespace: kubernetes-dashboard
	2025/10/29 09:39:05 Using in-cluster config to connect to apiserver
	2025/10/29 09:39:05 Using secret token for csrf signing
	2025/10/29 09:39:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:39:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:39:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:39:05 Generating JWE encryption key
	2025/10/29 09:39:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:39:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:39:06 Initializing JWE encryption key from synchronized object
	2025/10/29 09:39:06 Creating in-cluster Sidecar client
	2025/10/29 09:39:06 Serving insecurely on HTTP port: 9090
	2025/10/29 09:39:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:39:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:39:05 Starting overwatch
	
	
	==> storage-provisioner [0c86ad951f717e434b3bc0751b40d09aee480039cdbb2d71d3b5aba02ca39db8] <==
	I1029 09:39:23.713452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:39:23.726020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:39:23.726076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:39:23.729289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:27.184387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:31.444993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:35.042890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:38.097217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:41.119560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:41.124953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:39:41.125337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:39:41.125562       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-154565_c6f440f5-071e-4166-81ce-7160908dbf51!
	I1029 09:39:41.125723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb2a2ad0-3fcc-4033-a090-3abddb1b193f", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-154565_c6f440f5-071e-4166-81ce-7160908dbf51 became leader
	W1029 09:39:41.131208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:41.143560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:39:41.232025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-154565_c6f440f5-071e-4166-81ce-7160908dbf51!
	W1029 09:39:43.147204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:43.158608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:45.164663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:45.173017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471] <==
	I1029 09:38:52.882702       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:39:22.887489       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
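The storage-provisioner output above shows leader election still going through the deprecated core/v1 Endpoints API (the kube-system/k8s.io-minikube-hostpath object it acquires at 09:39:41). As a rough follow-up sketch only, assuming kubectl access to this profile's context, that object and the namespace's discovery.k8s.io/v1 EndpointSlices could be listed with:

	kubectl --context default-k8s-diff-port-154565 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context default-k8s-diff-port-154565 -n kube-system get endpointslices.discovery.k8s.io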
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565: exit status 2 (374.757763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-154565
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-154565:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683",
	        "Created": "2025-10-29T09:36:47.880643174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:38:33.507378256Z",
	            "FinishedAt": "2025-10-29T09:38:32.461497778Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/hosts",
	        "LogPath": "/var/lib/docker/containers/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683-json.log",
	        "Name": "/default-k8s-diff-port-154565",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-154565:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-154565",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683",
	                "LowerDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869-init/diff:/var/lib/docker/overlay2/512c003c31c6c889d7180677d0a7d04f9641d651bb7c0f829b95ca7e47c2836c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9ba4e32c5a57a2a0e65d7ce595e96b480a301690f5c728e704090e910736b869/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-154565",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-154565/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-154565",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-154565",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-154565",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "179e0304474060f405a3c52d398d589dd009fd7a533a53bc11bbcde9ddcc8032",
	            "SandboxKey": "/var/run/docker/netns/179e03044740",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-154565": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:b9:94:ac:ea:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3acff3dac19998e01d626c0b1e4f259c12319017d7e423e1cda5eea55f18a36",
	                    "EndpointID": "b1495ab6fb0a93308e73d603f9a4c31895427f281e940bc3577649d2c16229c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-154565",
	                        "dfc2c419fe48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
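The inspect dump above carries the full host port map for the container. When only a single value is needed, the same data can be pulled with docker inspect's --format Go template; a minimal sketch using the container name and the 8444/tcp API-server port shown above:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-154565

Against the state captured in this report that should print 33096, the host port backing the profile's 8444 API-server endpoint.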
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565: exit status 2 (342.077925ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-154565 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-154565 logs -n 25: (1.289802942s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p no-preload-505993                                                                                                                                                                                                                          │ no-preload-505993            │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ delete  │ -p disable-driver-mounts-012564                                                                                                                                                                                                               │ disable-driver-mounts-012564 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ embed-certs-946178 image list --format=json                                                                                                                                                                                                   │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:36 UTC │
	│ pause   │ -p embed-certs-946178 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │                     │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:36 UTC │ 29 Oct 25 09:37 UTC │
	│ delete  │ -p embed-certs-946178                                                                                                                                                                                                                         │ embed-certs-946178           │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-194729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │                     │
	│ stop    │ -p newest-cni-194729 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-194729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:37 UTC │
	│ start   │ -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:37 UTC │ 29 Oct 25 09:38 UTC │
	│ image   │ newest-cni-194729 image list --format=json                                                                                                                                                                                                    │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ pause   │ -p newest-cni-194729 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	│ delete  │ -p newest-cni-194729                                                                                                                                                                                                                          │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ delete  │ -p newest-cni-194729                                                                                                                                                                                                                          │ newest-cni-194729            │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ start   │ -p auto-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-937200                  │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-154565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-154565 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-154565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:38 UTC │ 29 Oct 25 09:39 UTC │
	│ ssh     │ -p auto-937200 pgrep -a kubelet                                                                                                                                                                                                               │ auto-937200                  │ jenkins │ v1.37.0 │ 29 Oct 25 09:39 UTC │ 29 Oct 25 09:39 UTC │
	│ image   │ default-k8s-diff-port-154565 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:39 UTC │ 29 Oct 25 09:39 UTC │
	│ pause   │ -p default-k8s-diff-port-154565 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-154565 │ jenkins │ v1.37.0 │ 29 Oct 25 09:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:38:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:38:33.099808  215661 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:38:33.100410  215661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:33.100467  215661 out.go:374] Setting ErrFile to fd 2...
	I1029 09:38:33.100486  215661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:38:33.100778  215661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:38:33.101194  215661 out.go:368] Setting JSON to false
	I1029 09:38:33.102112  215661 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4865,"bootTime":1761725848,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:38:33.102209  215661 start.go:143] virtualization:  
	I1029 09:38:33.107184  215661 out.go:179] * [default-k8s-diff-port-154565] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:38:33.110319  215661 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:38:33.110386  215661 notify.go:221] Checking for updates...
	I1029 09:38:33.116156  215661 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:38:33.119006  215661 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:33.121985  215661 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:38:33.124803  215661 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:38:33.127744  215661 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:38:33.131015  215661 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:33.131636  215661 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:38:33.172463  215661 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:38:33.172606  215661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:33.290075  215661 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:38:33.277358778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:33.290215  215661 docker.go:319] overlay module found
	I1029 09:38:33.293210  215661 out.go:179] * Using the docker driver based on existing profile
	I1029 09:38:33.296028  215661 start.go:309] selected driver: docker
	I1029 09:38:33.296043  215661 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:38:33.296144  215661 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:38:33.296878  215661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:38:33.396111  215661 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:38:33.381526548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:38:33.396538  215661 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:38:33.396575  215661 cni.go:84] Creating CNI manager for ""
	I1029 09:38:33.396626  215661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:33.396663  215661 start.go:353] cluster config:
	{Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:38:33.399680  215661 out.go:179] * Starting "default-k8s-diff-port-154565" primary control-plane node in "default-k8s-diff-port-154565" cluster
	I1029 09:38:33.402480  215661 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:38:33.405370  215661 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:38:33.408122  215661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:38:33.408188  215661 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 09:38:33.408202  215661 cache.go:59] Caching tarball of preloaded images
	I1029 09:38:33.408288  215661 preload.go:233] Found /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1029 09:38:33.408303  215661 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:38:33.408372  215661 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:38:33.408684  215661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:38:33.441170  215661 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:38:33.441189  215661 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:38:33.441203  215661 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:38:33.441224  215661 start.go:360] acquireMachinesLock for default-k8s-diff-port-154565: {Name:mk949f3a944b6d0d5624c677fdcfbf59ea2f05b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:38:33.441294  215661 start.go:364] duration metric: took 45.334µs to acquireMachinesLock for "default-k8s-diff-port-154565"
	I1029 09:38:33.441313  215661 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:38:33.441318  215661 fix.go:54] fixHost starting: 
	I1029 09:38:33.441579  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:33.462053  215661 fix.go:112] recreateIfNeeded on default-k8s-diff-port-154565: state=Stopped err=<nil>
	W1029 09:38:33.462083  215661 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:38:30.686547  213005 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:38:31.315023  213005 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:38:31.315657  213005 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:38:32.230104  213005 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:38:32.584571  213005 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:38:32.841451  213005 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:38:34.026135  213005 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:38:35.344476  213005 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:38:35.344577  213005 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:38:35.353421  213005 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:38:35.357171  213005 out.go:252]   - Booting up control plane ...
	I1029 09:38:35.357294  213005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:38:35.363390  213005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:38:35.363519  213005 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:38:35.381411  213005 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:38:35.381529  213005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:38:35.388709  213005 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:38:35.388999  213005 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:38:35.389186  213005 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:38:33.465500  215661 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-154565" ...
	I1029 09:38:33.465581  215661 cli_runner.go:164] Run: docker start default-k8s-diff-port-154565
	I1029 09:38:33.806970  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:33.830149  215661 kic.go:430] container "default-k8s-diff-port-154565" state is running.
	I1029 09:38:33.830888  215661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:38:33.854870  215661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/config.json ...
	I1029 09:38:33.855103  215661 machine.go:94] provisionDockerMachine start ...
	I1029 09:38:33.855158  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:33.886589  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:33.886913  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:33.886922  215661 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:38:33.887632  215661 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42996->127.0.0.1:33093: read: connection reset by peer
	I1029 09:38:37.044662  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:38:37.044699  215661 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-154565"
	I1029 09:38:37.044806  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:37.067918  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:37.068227  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:37.068245  215661 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-154565 && echo "default-k8s-diff-port-154565" | sudo tee /etc/hostname
	I1029 09:38:37.235076  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-154565
	
	I1029 09:38:37.235205  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:37.258378  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:37.258691  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:37.258711  215661 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-154565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-154565/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-154565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:38:37.427782  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
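
The handshake failure at 09:38:33 ("connection reset by peer") followed by the successful hostname and /etc/hosts commands above shows the provisioner retrying its SSH dial against the container's mapped port (127.0.0.1:33093) right after `docker start`. Below is a minimal sketch of that dial-and-retry pattern using golang.org/x/crypto/ssh; it is not minikube's actual libmachine code, and the retry count and sleep interval are illustrative.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runSSH dials an SSH endpoint and runs a single command, retrying the
// handshake a few times because a freshly restarted container may reset
// the first connection (as seen in the log above).
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production host
		Timeout:         10 * time.Second,
	}

	var client *ssh.Client
	for attempt := 0; attempt < 5; attempt++ {
		client, err = ssh.Dial("tcp", addr, cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second) // e.g. "connection reset by peer" right after docker start
	}
	if err != nil {
		return "", err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:33093", "docker",
		"/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa",
		"hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}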
	I1029 09:38:37.427862  215661 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-2763/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-2763/.minikube}
	I1029 09:38:37.427921  215661 ubuntu.go:190] setting up certificates
	I1029 09:38:37.427966  215661 provision.go:84] configureAuth start
	I1029 09:38:37.428051  215661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:38:37.457347  215661 provision.go:143] copyHostCerts
	I1029 09:38:37.457459  215661 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem, removing ...
	I1029 09:38:37.457475  215661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem
	I1029 09:38:37.457552  215661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/ca.pem (1082 bytes)
	I1029 09:38:37.457657  215661 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem, removing ...
	I1029 09:38:37.457662  215661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem
	I1029 09:38:37.457687  215661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/cert.pem (1123 bytes)
	I1029 09:38:37.457745  215661 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem, removing ...
	I1029 09:38:37.457749  215661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem
	I1029 09:38:37.457773  215661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-2763/.minikube/key.pem (1679 bytes)
	I1029 09:38:37.457825  215661 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-154565 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-154565 localhost minikube]
	I1029 09:38:35.564761  213005 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:38:35.564894  213005 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:38:38.068083  213005 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.502064165s
	I1029 09:38:38.070090  213005 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:38:38.070449  213005 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:38:38.070817  213005 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:38:38.072085  213005 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
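
The interleaved kubeadm output from the second profile (process 213005) probes kube-apiserver at /livez, kube-controller-manager at /healthz and kube-scheduler at /livez until each reports healthy. The sketch below polls those same three endpoints; the skip-verify TLS config, poll interval and 4m cap are illustrative rather than kubeadm's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll the health endpoints that kubeadm's control-plane-check reports above.
// The component serving certificates are not trusted by the host, so
// certificate verification is skipped; this is a local health probe only.
func main() {
	endpoints := map[string]string{
		"kube-apiserver":          "https://192.168.85.2:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for name, url := range endpoints {
		start := time.Now()
		for {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Printf("%s is healthy after %s\n", name, time.Since(start))
				break
			}
			if err == nil {
				resp.Body.Close()
			}
			if time.Since(start) > 4*time.Minute { // "This can take up to 4m0s" per the log
				fmt.Printf("%s did not become healthy\n", name)
				break
			}
			time.Sleep(time.Second)
		}
	}
}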
	I1029 09:38:38.854299  215661 provision.go:177] copyRemoteCerts
	I1029 09:38:38.854396  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:38:38.854465  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:38.872181  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:38.998013  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:38:39.037116  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1029 09:38:39.082200  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:38:39.122523  215661 provision.go:87] duration metric: took 1.694518382s to configureAuth
	I1029 09:38:39.122611  215661 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:38:39.122862  215661 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:39.123040  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:39.167553  215661 main.go:143] libmachine: Using SSH client type: native
	I1029 09:38:39.167912  215661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1029 09:38:39.167926  215661 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:38:39.673548  215661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:38:39.673572  215661 machine.go:97] duration metric: took 5.81845959s to provisionDockerMachine
	I1029 09:38:39.673582  215661 start.go:293] postStartSetup for "default-k8s-diff-port-154565" (driver="docker")
	I1029 09:38:39.673593  215661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:38:39.673722  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:38:39.673782  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:39.705688  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:39.835879  215661 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:38:39.845399  215661 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:38:39.845430  215661 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:38:39.845449  215661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/addons for local assets ...
	I1029 09:38:39.845515  215661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-2763/.minikube/files for local assets ...
	I1029 09:38:39.845608  215661 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem -> 45502.pem in /etc/ssl/certs
	I1029 09:38:39.845719  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:38:39.862855  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:38:39.901228  215661 start.go:296] duration metric: took 227.62951ms for postStartSetup
	I1029 09:38:39.901402  215661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:38:39.901503  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:39.934934  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:40.050493  215661 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:38:40.056090  215661 fix.go:56] duration metric: took 6.614763856s for fixHost
	I1029 09:38:40.056111  215661 start.go:83] releasing machines lock for "default-k8s-diff-port-154565", held for 6.61480919s
	I1029 09:38:40.056180  215661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-154565
	I1029 09:38:40.094135  215661 ssh_runner.go:195] Run: cat /version.json
	I1029 09:38:40.094186  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:40.094209  215661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:38:40.094278  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:40.133876  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:40.135082  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:40.347499  215661 ssh_runner.go:195] Run: systemctl --version
	I1029 09:38:40.357034  215661 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:38:40.449556  215661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:38:40.460864  215661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:38:40.460966  215661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:38:40.481089  215661 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:38:40.481113  215661 start.go:496] detecting cgroup driver to use...
	I1029 09:38:40.481175  215661 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1029 09:38:40.481263  215661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:38:40.505647  215661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:38:40.526178  215661 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:38:40.526271  215661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:38:40.553513  215661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:38:40.572757  215661 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:38:40.762667  215661 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:38:40.969449  215661 docker.go:234] disabling docker service ...
	I1029 09:38:40.969545  215661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:38:41.004697  215661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:38:41.024892  215661 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:38:41.195454  215661 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:38:41.389826  215661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:38:41.416036  215661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:38:41.444163  215661 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:38:41.444345  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.460122  215661 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:38:41.460248  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.471086  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.482832  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.494056  215661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:38:41.510349  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.533564  215661 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.542552  215661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:38:41.552238  215661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:38:41.560934  215661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:38:41.569276  215661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:41.793469  215661 ssh_runner.go:195] Run: sudo systemctl restart crio
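
The sequence above pins the pause image and switches the cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed, then restarts CRI-O. Below is a rough Go equivalent of the line-substitution step, assuming the same drop-in path and keys; it is a sketch, not the ssh_runner-based code minikube actually runs.

package main

import (
	"log"
	"os"
	"regexp"
)

// setConfValue rewrites a `key = value` line in the CRI-O drop-in, mirroring
// the sed -i 's|^.*pause_image = .*$|...|' commands in the log above.
// Writing under /etc/crio requires root, just as the sudo sed calls do.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		log.Fatal(err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart crio` are still
	// required afterwards, as in the log above.
}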
	I1029 09:38:42.017985  215661 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:38:42.018124  215661 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:38:42.025040  215661 start.go:564] Will wait 60s for crictl version
	I1029 09:38:42.025163  215661 ssh_runner.go:195] Run: which crictl
	I1029 09:38:42.030024  215661 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:38:42.067560  215661 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:38:42.067717  215661 ssh_runner.go:195] Run: crio --version
	I1029 09:38:42.138136  215661 ssh_runner.go:195] Run: crio --version
	I1029 09:38:42.195579  215661 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:38:42.198633  215661 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-154565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:38:42.226154  215661 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:38:42.231162  215661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:38:42.246674  215661 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:38:42.246794  215661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:38:42.246856  215661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:38:42.327359  215661 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:38:42.327385  215661 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:38:42.327471  215661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:38:42.378716  215661 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:38:42.378735  215661 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:38:42.378743  215661 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1029 09:38:42.378849  215661 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-154565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:38:42.378926  215661 ssh_runner.go:195] Run: crio config
	I1029 09:38:42.473129  215661 cni.go:84] Creating CNI manager for ""
	I1029 09:38:42.473151  215661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:42.473200  215661 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:38:42.473233  215661 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-154565 NodeName:default-k8s-diff-port-154565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:38:42.473409  215661 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-154565"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
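
The generated kubeadm config above is what makes this the "default-k8s-diff-port" profile: both localAPIEndpoint.bindPort and controlPlaneEndpoint use 8444 instead of the default 8443. A small sketch that reads the multi-document file and prints those fields, assuming gopkg.in/yaml.v3 and the /var/tmp/minikube/kubeadm.yaml.new path used by the scp step below:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Walk the multi-document kubeadm config and print the fields that carry the
// non-default API server port (8444) used by this profile.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		switch doc["kind"] {
		case "InitConfiguration":
			if ep, ok := doc["localAPIEndpoint"].(map[string]interface{}); ok {
				fmt.Println("InitConfiguration bindPort:", ep["bindPort"]) // expect 8444
			}
		case "ClusterConfiguration":
			fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"]) // expect ...:8444
		}
	}
}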
	
	I1029 09:38:42.473493  215661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:38:42.483443  215661 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:38:42.483548  215661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:38:42.497665  215661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1029 09:38:42.520827  215661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:38:42.537672  215661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1029 09:38:42.565724  215661 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:38:42.569963  215661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:38:42.581526  215661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:42.784820  215661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:38:42.813917  215661 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565 for IP: 192.168.76.2
	I1029 09:38:42.813989  215661 certs.go:195] generating shared ca certs ...
	I1029 09:38:42.814022  215661 certs.go:227] acquiring lock for ca certs: {Name:mk715611293338c39436fe9072c5ce0c2fb993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:42.814234  215661 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key
	I1029 09:38:42.814321  215661 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key
	I1029 09:38:42.814345  215661 certs.go:257] generating profile certs ...
	I1029 09:38:42.814482  215661 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.key
	I1029 09:38:42.814591  215661 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/apiserver.key.f827afaa
	I1029 09:38:42.814673  215661 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/proxy-client.key
	I1029 09:38:42.814848  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem (1338 bytes)
	W1029 09:38:42.814917  215661 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550_empty.pem, impossibly tiny 0 bytes
	I1029 09:38:42.814943  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:38:42.814999  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:38:42.815066  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:38:42.815111  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/certs/key.pem (1679 bytes)
	I1029 09:38:42.815212  215661 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem (1708 bytes)
	I1029 09:38:42.816016  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:38:42.875334  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:38:42.921448  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:38:42.961408  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:38:42.992042  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 09:38:43.053821  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:38:43.099321  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:38:43.144803  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:38:43.211557  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/certs/4550.pem --> /usr/share/ca-certificates/4550.pem (1338 bytes)
	I1029 09:38:43.250637  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/ssl/certs/45502.pem --> /usr/share/ca-certificates/45502.pem (1708 bytes)
	I1029 09:38:43.300589  215661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:38:43.328729  215661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:38:43.350562  215661 ssh_runner.go:195] Run: openssl version
	I1029 09:38:43.357868  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/45502.pem && ln -fs /usr/share/ca-certificates/45502.pem /etc/ssl/certs/45502.pem"
	I1029 09:38:43.370979  215661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/45502.pem
	I1029 09:38:43.375837  215661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:27 /usr/share/ca-certificates/45502.pem
	I1029 09:38:43.375929  215661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/45502.pem
	I1029 09:38:43.425892  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/45502.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:38:43.434151  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:38:43.443840  215661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:38:43.452263  215661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:38:43.452393  215661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:38:43.495428  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:38:43.503755  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4550.pem && ln -fs /usr/share/ca-certificates/4550.pem /etc/ssl/certs/4550.pem"
	I1029 09:38:43.512088  215661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4550.pem
	I1029 09:38:43.516415  215661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:27 /usr/share/ca-certificates/4550.pem
	I1029 09:38:43.516507  215661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4550.pem
	I1029 09:38:43.566754  215661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4550.pem /etc/ssl/certs/51391683.0"
	I1029 09:38:43.575012  215661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:38:43.579531  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:38:43.623633  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:38:43.666777  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:38:43.756694  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:38:43.837423  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:38:43.909945  215661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
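
Each `openssl x509 -checkend 86400` call above asks whether one of the existing control-plane certificates expires within the next 24 hours before the stopped cluster's configuration is reused. The same check expressed in Go with crypto/x509, using one of the certificate paths from the log; the helper name is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within the window,
// mirroring `openssl x509 -checkend 86400` from the log above.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if expiring {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}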
	I1029 09:38:43.997177  215661 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-154565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-154565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:38:43.997265  215661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:38:43.997394  215661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:38:44.117996  215661 cri.go:89] found id: "fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944"
	I1029 09:38:44.118020  215661 cri.go:89] found id: "2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5"
	I1029 09:38:44.118025  215661 cri.go:89] found id: ""
	I1029 09:38:44.118110  215661 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:38:44.183141  215661 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:38:44Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:38:44.183263  215661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:38:44.211267  215661 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:38:44.211287  215661 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:38:44.211365  215661 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:38:44.242255  215661 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:38:44.242795  215661 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-154565" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:44.242950  215661 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-2763/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-154565" cluster setting kubeconfig missing "default-k8s-diff-port-154565" context setting]
	I1029 09:38:44.243303  215661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:44.245102  215661 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:38:44.260892  215661 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1029 09:38:44.260971  215661 kubeadm.go:602] duration metric: took 49.678367ms to restartPrimaryControlPlane
	I1029 09:38:44.260994  215661 kubeadm.go:403] duration metric: took 263.825425ms to StartCluster
	I1029 09:38:44.261033  215661 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:44.261126  215661 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:44.261820  215661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:44.262078  215661 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:38:44.262455  215661 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:38:44.262525  215661 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-154565"
	I1029 09:38:44.262538  215661 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-154565"
	W1029 09:38:44.262544  215661 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:38:44.262564  215661 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:38:44.263204  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.263544  215661 config.go:182] Loaded profile config "default-k8s-diff-port-154565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:44.263666  215661 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-154565"
	I1029 09:38:44.263735  215661 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-154565"
	I1029 09:38:44.264053  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.264229  215661 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-154565"
	I1029 09:38:44.264273  215661 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-154565"
	W1029 09:38:44.264293  215661 addons.go:248] addon dashboard should already be in state true
	I1029 09:38:44.264343  215661 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:38:44.265229  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.267125  215661 out.go:179] * Verifying Kubernetes components...
	I1029 09:38:44.276417  215661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:44.301896  215661 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:38:44.305103  215661 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:44.305126  215661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:38:44.305191  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:44.336532  215661 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:38:44.340975  215661 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-154565"
	W1029 09:38:44.340997  215661 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:38:44.341022  215661 host.go:66] Checking if "default-k8s-diff-port-154565" exists ...
	I1029 09:38:44.341524  215661 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-154565 --format={{.State.Status}}
	I1029 09:38:44.349377  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:44.350358  215661 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:38:43.269606  213005 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.196213385s
	I1029 09:38:44.353350  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:38:44.353373  215661 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:38:44.353440  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:44.380507  215661 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:44.380529  215661 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:38:44.380592  215661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-154565
	I1029 09:38:44.421766  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:44.432538  215661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/default-k8s-diff-port-154565/id_rsa Username:docker}
	I1029 09:38:44.781343  215661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:38:44.829855  215661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:44.832508  215661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:44.885934  215661 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-154565" to be "Ready" ...
	I1029 09:38:44.892362  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:38:44.892445  215661 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:38:45.046304  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:38:45.046388  215661 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:38:45.171590  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:38:45.171676  215661 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:38:45.350916  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:38:45.350991  215661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:38:45.397653  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:38:45.397731  215661 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:38:45.446337  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:38:45.446399  215661 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:38:45.477763  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:38:45.477834  215661 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:38:45.503761  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:38:45.503832  215661 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:38:45.545896  215661 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:38:45.545972  215661 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:38:45.606196  215661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
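
After the storageclass, storage-provisioner and dashboard manifests are applied, the run waits up to 6m0s for node "default-k8s-diff-port-154565" to report Ready (node_ready.go:35 above). Below is a minimal client-go sketch of such a wait loop; the kubeconfig path is the one from this job's workspace, and the poll interval is illustrative.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// roughly what the "waiting up to 6m0s for node ... to be Ready" step does.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s was not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21800-2763/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(cs, "default-k8s-diff-port-154565", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}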
	I1029 09:38:47.347710  213005 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.273932906s
	I1029 09:38:48.073715  213005 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002413045s
	I1029 09:38:48.094839  213005 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:38:48.111386  213005 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:38:48.127105  213005 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:38:48.127538  213005 kubeadm.go:319] [mark-control-plane] Marking the node auto-937200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:38:48.140654  213005 kubeadm.go:319] [bootstrap-token] Using token: zwnbgz.6ylmjp2fugmqq52x
	I1029 09:38:48.143605  213005 out.go:252]   - Configuring RBAC rules ...
	I1029 09:38:48.143734  213005 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:38:48.148909  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:38:48.163793  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:38:48.170567  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:38:48.175396  213005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:38:48.182016  213005 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:38:48.480673  213005 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:38:48.977507  213005 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:38:49.495224  213005 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:38:49.497154  213005 kubeadm.go:319] 
	I1029 09:38:49.497241  213005 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:38:49.497253  213005 kubeadm.go:319] 
	I1029 09:38:49.497336  213005 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:38:49.497345  213005 kubeadm.go:319] 
	I1029 09:38:49.497372  213005 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:38:49.497439  213005 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:38:49.497500  213005 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:38:49.497509  213005 kubeadm.go:319] 
	I1029 09:38:49.497565  213005 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:38:49.497574  213005 kubeadm.go:319] 
	I1029 09:38:49.497624  213005 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:38:49.497648  213005 kubeadm.go:319] 
	I1029 09:38:49.497707  213005 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:38:49.497789  213005 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:38:49.497864  213005 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:38:49.497873  213005 kubeadm.go:319] 
	I1029 09:38:49.497961  213005 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:38:49.498045  213005 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:38:49.498054  213005 kubeadm.go:319] 
	I1029 09:38:49.498143  213005 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zwnbgz.6ylmjp2fugmqq52x \
	I1029 09:38:49.498255  213005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e \
	I1029 09:38:49.498279  213005 kubeadm.go:319] 	--control-plane 
	I1029 09:38:49.498289  213005 kubeadm.go:319] 
	I1029 09:38:49.498378  213005 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:38:49.498387  213005 kubeadm.go:319] 
	I1029 09:38:49.498473  213005 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zwnbgz.6ylmjp2fugmqq52x \
	I1029 09:38:49.498766  213005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da4a5b90580f0f492e24f667f5676cec258425f736b389045aee440db981859e 
	I1029 09:38:49.503801  213005 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1029 09:38:49.504042  213005 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1029 09:38:49.504155  213005 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:38:49.504175  213005 cni.go:84] Creating CNI manager for ""
	I1029 09:38:49.504186  213005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:38:49.509813  213005 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:38:49.512708  213005 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:38:49.517797  213005 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:38:49.517815  213005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:38:49.542495  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:38:50.233032  213005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:38:50.233156  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:50.233219  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-937200 minikube.k8s.io/updated_at=2025_10_29T09_38_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=auto-937200 minikube.k8s.io/primary=true
	I1029 09:38:51.484381  215661 node_ready.go:49] node "default-k8s-diff-port-154565" is "Ready"
	I1029 09:38:51.484408  215661 node_ready.go:38] duration metric: took 6.598404743s for node "default-k8s-diff-port-154565" to be "Ready" ...
	I1029 09:38:51.484423  215661 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:38:51.484483  215661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:38:51.700601  215661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.870664954s)
	I1029 09:38:53.219066  215661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.386480547s)
	I1029 09:38:53.317215  215661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.710937614s)
	I1029 09:38:53.317451  215661 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.832945572s)
	I1029 09:38:53.317486  215661 api_server.go:72] duration metric: took 9.055349624s to wait for apiserver process to appear ...
	I1029 09:38:53.317506  215661 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:38:53.317537  215661 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1029 09:38:53.320278  215661 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-154565 addons enable metrics-server
	
	I1029 09:38:53.323164  215661 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1029 09:38:50.753853  213005 ops.go:34] apiserver oom_adj: -16
	I1029 09:38:50.753973  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:51.254071  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:51.754695  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:52.255041  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:52.754350  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:53.254226  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:53.754422  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:54.254932  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:54.754882  213005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:38:54.982904  213005 kubeadm.go:1114] duration metric: took 4.749791789s to wait for elevateKubeSystemPrivileges
	I1029 09:38:54.982930  213005 kubeadm.go:403] duration metric: took 28.357253238s to StartCluster
	I1029 09:38:54.982946  213005 settings.go:142] acquiring lock: {Name:mkeba1c0d8e3656c2a05b6b6f1f81184498df216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:54.983007  213005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:38:54.983941  213005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/kubeconfig: {Name:mk1b537ffd51e40f3836d77ff04628163d9a9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:38:54.984142  213005 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:38:54.984299  213005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:38:54.984567  213005 config.go:182] Loaded profile config "auto-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:38:54.984599  213005 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:38:54.984657  213005 addons.go:70] Setting storage-provisioner=true in profile "auto-937200"
	I1029 09:38:54.984671  213005 addons.go:239] Setting addon storage-provisioner=true in "auto-937200"
	I1029 09:38:54.984692  213005 host.go:66] Checking if "auto-937200" exists ...
	I1029 09:38:54.985136  213005 addons.go:70] Setting default-storageclass=true in profile "auto-937200"
	I1029 09:38:54.985154  213005 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-937200"
	I1029 09:38:54.985564  213005 cli_runner.go:164] Run: docker container inspect auto-937200 --format={{.State.Status}}
	I1029 09:38:54.985942  213005 cli_runner.go:164] Run: docker container inspect auto-937200 --format={{.State.Status}}
	I1029 09:38:54.987646  213005 out.go:179] * Verifying Kubernetes components...
	I1029 09:38:54.997569  213005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:38:55.024155  213005 addons.go:239] Setting addon default-storageclass=true in "auto-937200"
	I1029 09:38:55.024205  213005 host.go:66] Checking if "auto-937200" exists ...
	I1029 09:38:55.024667  213005 cli_runner.go:164] Run: docker container inspect auto-937200 --format={{.State.Status}}
	I1029 09:38:55.038974  213005 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:38:55.041965  213005 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:55.041992  213005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:38:55.042062  213005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-937200
	I1029 09:38:55.072589  213005 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:55.072615  213005 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:38:55.072695  213005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-937200
	I1029 09:38:55.079218  213005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/auto-937200/id_rsa Username:docker}
	I1029 09:38:55.098506  213005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/auto-937200/id_rsa Username:docker}
	I1029 09:38:55.448946  213005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:38:55.449192  213005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:38:55.521203  213005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:38:55.547208  213005 node_ready.go:35] waiting up to 15m0s for node "auto-937200" to be "Ready" ...
	I1029 09:38:55.613002  213005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:38:56.168771  213005 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1029 09:38:56.514189  213005 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1029 09:38:53.326024  215661 addons.go:515] duration metric: took 9.063557568s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1029 09:38:53.331234  215661 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1029 09:38:53.333714  215661 api_server.go:141] control plane version: v1.34.1
	I1029 09:38:53.333777  215661 api_server.go:131] duration metric: took 16.250946ms to wait for apiserver health ...
	I1029 09:38:53.333805  215661 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:38:53.340204  215661 system_pods.go:59] 8 kube-system pods found
	I1029 09:38:53.340280  215661 system_pods.go:61] "coredns-66bc5c9577-hbn59" [571dd534-5c05-4ea1-b2fa-292f307b4037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:38:53.340344  215661 system_pods.go:61] "etcd-default-k8s-diff-port-154565" [53c9dae2-fca7-4051-b461-90cb4406bce2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:38:53.340373  215661 system_pods.go:61] "kindnet-btswn" [a7737b1f-9d42-4a7d-8bd7-84911d52c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:38:53.340395  215661 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-154565" [2272867f-fac7-443c-9471-ca7f7627c890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:38:53.340422  215661 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-154565" [2430e944-a50b-4c78-8361-998a66b1a633] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:38:53.340455  215661 system_pods.go:61] "kube-proxy-vxlb9" [46793add-1a42-48cd-835c-69d4f9a1bf7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:38:53.340487  215661 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-154565" [66c1d519-8710-43bd-b90d-5bc17357ddd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:38:53.340517  215661 system_pods.go:61] "storage-provisioner" [3716ce63-bbfd-489a-a382-9c6d5dc40925] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:38:53.340544  215661 system_pods.go:74] duration metric: took 6.718239ms to wait for pod list to return data ...
	I1029 09:38:53.340592  215661 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:38:53.343572  215661 default_sa.go:45] found service account: "default"
	I1029 09:38:53.343639  215661 default_sa.go:55] duration metric: took 3.027006ms for default service account to be created ...
	I1029 09:38:53.343663  215661 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:38:53.347434  215661 system_pods.go:86] 8 kube-system pods found
	I1029 09:38:53.347513  215661 system_pods.go:89] "coredns-66bc5c9577-hbn59" [571dd534-5c05-4ea1-b2fa-292f307b4037] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:38:53.347537  215661 system_pods.go:89] "etcd-default-k8s-diff-port-154565" [53c9dae2-fca7-4051-b461-90cb4406bce2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:38:53.347577  215661 system_pods.go:89] "kindnet-btswn" [a7737b1f-9d42-4a7d-8bd7-84911d52c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:38:53.347606  215661 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-154565" [2272867f-fac7-443c-9471-ca7f7627c890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:38:53.347634  215661 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-154565" [2430e944-a50b-4c78-8361-998a66b1a633] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:38:53.347663  215661 system_pods.go:89] "kube-proxy-vxlb9" [46793add-1a42-48cd-835c-69d4f9a1bf7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:38:53.347700  215661 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-154565" [66c1d519-8710-43bd-b90d-5bc17357ddd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:38:53.347721  215661 system_pods.go:89] "storage-provisioner" [3716ce63-bbfd-489a-a382-9c6d5dc40925] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:38:53.347744  215661 system_pods.go:126] duration metric: took 4.062691ms to wait for k8s-apps to be running ...
	I1029 09:38:53.347778  215661 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:38:53.347851  215661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:38:53.368274  215661 system_svc.go:56] duration metric: took 20.488687ms WaitForService to wait for kubelet
	I1029 09:38:53.368359  215661 kubeadm.go:587] duration metric: took 9.106220503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:38:53.368395  215661 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:38:53.377200  215661 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:38:53.377281  215661 node_conditions.go:123] node cpu capacity is 2
	I1029 09:38:53.377308  215661 node_conditions.go:105] duration metric: took 8.891412ms to run NodePressure ...
	I1029 09:38:53.377357  215661 start.go:242] waiting for startup goroutines ...
	I1029 09:38:53.377385  215661 start.go:247] waiting for cluster config update ...
	I1029 09:38:53.377412  215661 start.go:256] writing updated cluster config ...
	I1029 09:38:53.377745  215661 ssh_runner.go:195] Run: rm -f paused
	I1029 09:38:53.382286  215661 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:38:53.387677  215661 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hbn59" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:38:55.393783  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:38:57.394997  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	I1029 09:38:56.517455  213005 addons.go:515] duration metric: took 1.532819718s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1029 09:38:56.672821  213005 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-937200" context rescaled to 1 replicas
	W1029 09:38:57.551393  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:38:59.551639  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:38:59.892866  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:01.894823  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:02.051377  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:04.549856  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:04.400171  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:06.893682  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:06.550621  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:09.050839  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:09.394081  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:11.398469  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:11.550856  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:14.050822  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:13.892978  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:16.395595  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:16.550000  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:18.550825  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:18.396802  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:20.397735  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:22.895975  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:21.050787  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:23.050941  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:25.051011  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:25.396471  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	W1029 09:39:27.397886  215661 pod_ready.go:104] pod "coredns-66bc5c9577-hbn59" is not "Ready", error: <nil>
	I1029 09:39:28.893737  215661 pod_ready.go:94] pod "coredns-66bc5c9577-hbn59" is "Ready"
	I1029 09:39:28.893773  215661 pod_ready.go:86] duration metric: took 35.506030291s for pod "coredns-66bc5c9577-hbn59" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.896935  215661 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.901644  215661 pod_ready.go:94] pod "etcd-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:28.901671  215661 pod_ready.go:86] duration metric: took 4.709786ms for pod "etcd-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.903798  215661 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.908247  215661 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:28.908354  215661 pod_ready.go:86] duration metric: took 4.530265ms for pod "kube-apiserver-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:28.910831  215661 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.092255  215661 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:29.092287  215661 pod_ready.go:86] duration metric: took 181.432296ms for pod "kube-controller-manager-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.292057  215661 pod_ready.go:83] waiting for pod "kube-proxy-vxlb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.692669  215661 pod_ready.go:94] pod "kube-proxy-vxlb9" is "Ready"
	I1029 09:39:29.692697  215661 pod_ready.go:86] duration metric: took 400.610431ms for pod "kube-proxy-vxlb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:29.891729  215661 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:30.292263  215661 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-154565" is "Ready"
	I1029 09:39:30.292294  215661 pod_ready.go:86] duration metric: took 400.536699ms for pod "kube-scheduler-default-k8s-diff-port-154565" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:30.292334  215661 pod_ready.go:40] duration metric: took 36.90995243s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:39:30.349415  215661 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	W1029 09:39:27.550020  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:29.550269  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	I1029 09:39:30.437358  215661 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-154565" cluster and "default" namespace by default
	W1029 09:39:31.550884  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	W1029 09:39:34.050911  213005 node_ready.go:57] node "auto-937200" has "Ready":"False" status (will retry)
	I1029 09:39:36.051725  213005 node_ready.go:49] node "auto-937200" is "Ready"
	I1029 09:39:36.051751  213005 node_ready.go:38] duration metric: took 40.504497163s for node "auto-937200" to be "Ready" ...
	I1029 09:39:36.051766  213005 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:39:36.051827  213005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:39:36.065292  213005 api_server.go:72] duration metric: took 41.081123074s to wait for apiserver process to appear ...
	I1029 09:39:36.065312  213005 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:39:36.065333  213005 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:39:36.077806  213005 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:39:36.079054  213005 api_server.go:141] control plane version: v1.34.1
	I1029 09:39:36.079076  213005 api_server.go:131] duration metric: took 13.756138ms to wait for apiserver health ...
	I1029 09:39:36.079085  213005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:39:36.089027  213005 system_pods.go:59] 8 kube-system pods found
	I1029 09:39:36.089056  213005 system_pods.go:61] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending
	I1029 09:39:36.089062  213005 system_pods.go:61] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.089067  213005 system_pods.go:61] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.089071  213005 system_pods.go:61] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.089076  213005 system_pods.go:61] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.089080  213005 system_pods.go:61] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.089084  213005 system_pods.go:61] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.089089  213005 system_pods.go:61] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending
	I1029 09:39:36.089094  213005 system_pods.go:74] duration metric: took 10.004153ms to wait for pod list to return data ...
	I1029 09:39:36.089102  213005 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:39:36.096826  213005 default_sa.go:45] found service account: "default"
	I1029 09:39:36.096849  213005 default_sa.go:55] duration metric: took 7.741182ms for default service account to be created ...
	I1029 09:39:36.096858  213005 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:39:36.111888  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.111924  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending
	I1029 09:39:36.111931  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.111936  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.111941  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.111945  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.111949  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.111953  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.111964  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.112001  213005 retry.go:31] will retry after 230.447185ms: missing components: kube-dns
	I1029 09:39:36.346557  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.346597  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:39:36.346605  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.346611  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.346616  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.346620  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.346626  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.346630  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.346654  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.346677  213005 retry.go:31] will retry after 296.152942ms: missing components: kube-dns
	I1029 09:39:36.647666  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.647703  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:39:36.647710  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.647717  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.647722  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.647727  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.647733  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.647737  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.647743  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.647767  213005 retry.go:31] will retry after 303.946243ms: missing components: kube-dns
	I1029 09:39:36.955091  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:36.955140  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:39:36.955147  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:36.955153  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:36.955157  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:36.955162  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:36.955166  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:36.955170  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:36.955176  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:39:36.955189  213005 retry.go:31] will retry after 517.141809ms: missing components: kube-dns
	I1029 09:39:37.483240  213005 system_pods.go:86] 8 kube-system pods found
	I1029 09:39:37.483272  213005 system_pods.go:89] "coredns-66bc5c9577-tgrw8" [73ce956b-c6ca-426a-825e-51fe3f119917] Running
	I1029 09:39:37.483280  213005 system_pods.go:89] "etcd-auto-937200" [5dc08754-f4f8-4cfb-8daa-0d39d7ebf2af] Running
	I1029 09:39:37.483286  213005 system_pods.go:89] "kindnet-qqhf5" [baa553de-c84a-4cb4-b629-b021eb75966c] Running
	I1029 09:39:37.483291  213005 system_pods.go:89] "kube-apiserver-auto-937200" [e6297baa-bb55-41f6-8578-b5f59566062c] Running
	I1029 09:39:37.483295  213005 system_pods.go:89] "kube-controller-manager-auto-937200" [05a9a22c-272a-4597-95af-b79a3cad70a1] Running
	I1029 09:39:37.483299  213005 system_pods.go:89] "kube-proxy-dmr48" [e634703f-3441-49c6-9f33-7fd37262f5a4] Running
	I1029 09:39:37.483304  213005 system_pods.go:89] "kube-scheduler-auto-937200" [82b3f3f0-2609-40fd-a95b-09c198a4555e] Running
	I1029 09:39:37.483308  213005 system_pods.go:89] "storage-provisioner" [d3e26104-f613-42a2-accf-2a549b3a8983] Running
	I1029 09:39:37.483315  213005 system_pods.go:126] duration metric: took 1.386452299s to wait for k8s-apps to be running ...
	I1029 09:39:37.483328  213005 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:39:37.483394  213005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:39:37.502234  213005 system_svc.go:56] duration metric: took 18.894852ms WaitForService to wait for kubelet
	I1029 09:39:37.502260  213005 kubeadm.go:587] duration metric: took 42.518096204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:39:37.502280  213005 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:39:37.505971  213005 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1029 09:39:37.506008  213005 node_conditions.go:123] node cpu capacity is 2
	I1029 09:39:37.506023  213005 node_conditions.go:105] duration metric: took 3.683882ms to run NodePressure ...
	I1029 09:39:37.506036  213005 start.go:242] waiting for startup goroutines ...
	I1029 09:39:37.506044  213005 start.go:247] waiting for cluster config update ...
	I1029 09:39:37.506055  213005 start.go:256] writing updated cluster config ...
	I1029 09:39:37.506354  213005 ssh_runner.go:195] Run: rm -f paused
	I1029 09:39:37.510187  213005 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:39:37.581359  213005 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tgrw8" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.586202  213005 pod_ready.go:94] pod "coredns-66bc5c9577-tgrw8" is "Ready"
	I1029 09:39:37.586230  213005 pod_ready.go:86] duration metric: took 4.84203ms for pod "coredns-66bc5c9577-tgrw8" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.588913  213005 pod_ready.go:83] waiting for pod "etcd-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.593382  213005 pod_ready.go:94] pod "etcd-auto-937200" is "Ready"
	I1029 09:39:37.593452  213005 pod_ready.go:86] duration metric: took 4.509367ms for pod "etcd-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.595574  213005 pod_ready.go:83] waiting for pod "kube-apiserver-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.600067  213005 pod_ready.go:94] pod "kube-apiserver-auto-937200" is "Ready"
	I1029 09:39:37.600093  213005 pod_ready.go:86] duration metric: took 4.49614ms for pod "kube-apiserver-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.602471  213005 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:37.914842  213005 pod_ready.go:94] pod "kube-controller-manager-auto-937200" is "Ready"
	I1029 09:39:37.914878  213005 pod_ready.go:86] duration metric: took 312.383814ms for pod "kube-controller-manager-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:38.116065  213005 pod_ready.go:83] waiting for pod "kube-proxy-dmr48" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:38.514515  213005 pod_ready.go:94] pod "kube-proxy-dmr48" is "Ready"
	I1029 09:39:38.514552  213005 pod_ready.go:86] duration metric: took 398.457037ms for pod "kube-proxy-dmr48" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:38.715280  213005 pod_ready.go:83] waiting for pod "kube-scheduler-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:39.114824  213005 pod_ready.go:94] pod "kube-scheduler-auto-937200" is "Ready"
	I1029 09:39:39.114855  213005 pod_ready.go:86] duration metric: took 399.550471ms for pod "kube-scheduler-auto-937200" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:39:39.114868  213005 pod_ready.go:40] duration metric: took 1.604647504s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:39:39.170828  213005 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1029 09:39:39.177051  213005 out.go:179] * Done! kubectl is now configured to use "auto-937200" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.152479308Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.156035681Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.156071053Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.15609402Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.159927187Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.159962576Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.159985247Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.165190054Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.165224614Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.165246038Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.179678408Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:39:33 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:33.179735688Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.280940433Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fe891e8a-d6d0-4a3c-8071-5d6c7fb46252 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.282750075Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f52dd674-424b-4c6a-b561-60559afe0856 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.283961187Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper" id=556613ea-00c6-4431-a33f-f3e73978920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.284079506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.297464917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.298298237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.326606484Z" level=info msg="Created container def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper" id=556613ea-00c6-4431-a33f-f3e73978920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.331005203Z" level=info msg="Starting container: def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5" id=331af54f-576f-4b6b-8bac-7a0e11862ada name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.334556169Z" level=info msg="Started container" PID=1712 containerID=def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper id=331af54f-576f-4b6b-8bac-7a0e11862ada name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1e12f72104fa4e2ddfd586361e65c909e9ff57dd60886ef37199bc6e178a1b1
	Oct 29 09:39:39 default-k8s-diff-port-154565 conmon[1710]: conmon def5f21481b3d0e59948 <ninfo>: container 1712 exited with status 1
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.714012557Z" level=info msg="Removing container: 7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa" id=fc263607-7547-43ed-b0f1-282d5e4b0a87 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.739493094Z" level=info msg="Error loading conmon cgroup of container 7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa: cgroup deleted" id=fc263607-7547-43ed-b0f1-282d5e4b0a87 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:39:39 default-k8s-diff-port-154565 crio[645]: time="2025-10-29T09:39:39.746384964Z" level=info msg="Removed container 7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54/dashboard-metrics-scraper" id=fc263607-7547-43ed-b0f1-282d5e4b0a87 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	def5f21481b3d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   b1e12f72104fa       dashboard-metrics-scraper-6ffb444bf9-sgx54             kubernetes-dashboard
	0c86ad951f717       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   b345e804a673d       storage-provisioner                                    kube-system
	47c8964204a91       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   10acd8436c798       kubernetes-dashboard-855c9754f9-zcdsw                  kubernetes-dashboard
	c46b79795aaad       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   5e0beb6b56f95       coredns-66bc5c9577-hbn59                               kube-system
	996dd46a13bd9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   b93e9c1942bd9       kindnet-btswn                                          kube-system
	40419a34f22d4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   795f801432d51       kube-proxy-vxlb9                                       kube-system
	a78c14571ca50       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   ed8e58fc80bf8       busybox                                                default
	76deef5dfbe89       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   b345e804a673d       storage-provisioner                                    kube-system
	4ecc87c3c4efe       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4b0955f33583f       kube-apiserver-default-k8s-diff-port-154565            kube-system
	fac10df47d1f3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5e443d3878e11       kube-scheduler-default-k8s-diff-port-154565            kube-system
	921026fa87ee2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   16d3b21bf28e3       kube-controller-manager-default-k8s-diff-port-154565   kube-system
	2735bfa1503d0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2affffde098df       etcd-default-k8s-diff-port-154565                      kube-system
	
	
	==> coredns [c46b79795aaad08becba49a7b200667b944eb335b0b342474d42e8439a790a5d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52688 - 1155 "HINFO IN 5430665766845367308.6762811313159703522. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021407907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-154565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-154565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=default-k8s-diff-port-154565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_37_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:37:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-154565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:39:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:39:22 +0000   Wed, 29 Oct 2025 09:38:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-154565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                78efc080-8619-433f-9174-c9ba8af774f1
	  Boot ID:                    dcb5a4aa-84b0-4edf-aeb7-a96ccf6c0882
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-hbn59                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-default-k8s-diff-port-154565                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-btswn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-154565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-154565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-vxlb9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-154565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sgx54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zcdsw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m23s              kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m30s              kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m30s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s              kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s              kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m30s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s              node-controller  Node default-k8s-diff-port-154565 event: Registered Node default-k8s-diff-port-154565 in Controller
	  Normal   NodeReady                103s               kubelet          Node default-k8s-diff-port-154565 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node default-k8s-diff-port-154565 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node default-k8s-diff-port-154565 event: Registered Node default-k8s-diff-port-154565 in Controller
	
	
	==> dmesg <==
	[ +18.424492] overlayfs: idmapped layers are currently not supported
	[  +4.342269] hrtimer: interrupt took 2289025 ns
	[Oct29 09:12] overlayfs: idmapped layers are currently not supported
	[Oct29 09:13] overlayfs: idmapped layers are currently not supported
	[Oct29 09:14] overlayfs: idmapped layers are currently not supported
	[Oct29 09:20] overlayfs: idmapped layers are currently not supported
	[Oct29 09:23] overlayfs: idmapped layers are currently not supported
	[Oct29 09:24] overlayfs: idmapped layers are currently not supported
	[ +30.917844] overlayfs: idmapped layers are currently not supported
	[Oct29 09:27] overlayfs: idmapped layers are currently not supported
	[Oct29 09:29] overlayfs: idmapped layers are currently not supported
	[Oct29 09:30] overlayfs: idmapped layers are currently not supported
	[  +5.608805] overlayfs: idmapped layers are currently not supported
	[ +37.422429] overlayfs: idmapped layers are currently not supported
	[Oct29 09:31] overlayfs: idmapped layers are currently not supported
	[Oct29 09:32] overlayfs: idmapped layers are currently not supported
	[Oct29 09:34] overlayfs: idmapped layers are currently not supported
	[ +22.728709] overlayfs: idmapped layers are currently not supported
	[Oct29 09:35] overlayfs: idmapped layers are currently not supported
	[ +21.902387] overlayfs: idmapped layers are currently not supported
	[Oct29 09:37] overlayfs: idmapped layers are currently not supported
	[ +19.842209] overlayfs: idmapped layers are currently not supported
	[ +25.062735] overlayfs: idmapped layers are currently not supported
	[Oct29 09:38] overlayfs: idmapped layers are currently not supported
	[  +5.356953] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2735bfa1503d05a45f458d45439f5d361379ddf5a1c72b94147b431a43b261c5] <==
	{"level":"warn","ts":"2025-10-29T09:38:48.890172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:48.994710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.032646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.082761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.201122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.236210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.356365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.368494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.418564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.476580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.512406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.566212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.608548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.645611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.722052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.779186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.832928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.906355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.929821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.954463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:49.988727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.046049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.065039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.097035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:38:50.244436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47768","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:39:48 up  1:22,  0 user,  load average: 2.92, 3.66, 3.03
	Linux default-k8s-diff-port-154565 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [996dd46a13bd9c4fbc716e270a5ee2bfd1b8ca9b3678e68b888aa222415a9866] <==
	I1029 09:38:52.861067       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:38:52.861639       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:38:52.861837       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:38:52.861890       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:38:52.861926       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:38:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:38:53.146736       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:38:53.146754       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:38:53.146761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:38:53.146886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:39:23.146842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:39:23.146914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:39:23.147080       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:39:23.147708       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 09:39:24.746909       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:39:24.746942       1 metrics.go:72] Registering metrics
	I1029 09:39:24.747012       1 controller.go:711] "Syncing nftables rules"
	I1029 09:39:33.148403       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:39:33.148468       1 main.go:301] handling current node
	I1029 09:39:43.148652       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:39:43.148698       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4ecc87c3c4efebb87e8579fe30d41b373305c1560267c5e5c1c7e4f651d75911] <==
	I1029 09:38:51.491102       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:38:51.491268       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:38:51.491285       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:38:51.491312       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:38:51.491319       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:38:51.539058       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:38:51.541687       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:38:51.562331       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:38:51.562357       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:38:51.574645       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:38:51.591842       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:38:51.591878       1 policy_source.go:240] refreshing policies
	I1029 09:38:51.598890       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:38:51.613104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:38:52.205889       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:38:52.232771       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:38:52.699977       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:38:52.946351       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:38:53.072252       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:38:53.111433       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:38:53.248666       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.16.56"}
	I1029 09:38:53.304004       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.83.233"}
	I1029 09:38:54.934032       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:38:55.323157       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:38:55.372214       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [921026fa87ee220227613d52ff56bc6b3408a4d844d6176f9493e6f447ed8e33] <==
	I1029 09:38:54.917146       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:38:54.917203       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:38:54.917281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:54.917310       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:38:54.917341       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:38:54.918545       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:38:54.927625       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:38:54.934182       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:38:54.937737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:38:54.938344       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:38:54.941697       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:38:54.950494       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:38:54.957705       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:38:54.967811       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:38:54.967959       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:38:54.967978       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:38:54.967986       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:38:54.967999       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:38:54.968007       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:38:54.970155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:38:54.970924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:38:54.970940       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:38:54.990000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:38:54.996414       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:38:55.000478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [40419a34f22d499b5e10f2817ca3190043cf4654975faa221907811657572319] <==
	I1029 09:38:53.018433       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:38:53.383110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:38:53.487748       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:38:53.487788       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:38:53.487878       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:38:53.556645       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:38:53.556709       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:38:53.561357       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:38:53.561675       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:38:53.561714       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:38:53.576180       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:38:53.576215       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:38:53.582512       1 config.go:200] "Starting service config controller"
	I1029 09:38:53.582594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:38:53.582908       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:38:53.582949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:38:53.603037       1 config.go:309] "Starting node config controller"
	I1029 09:38:53.608409       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:38:53.608517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:38:53.677278       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:38:53.683658       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:38:53.683753       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fac10df47d1f3807c7e226078bc5907e12ab5e525c2712d52627272075aad944] <==
	I1029 09:38:48.810770       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:38:51.356557       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:38:51.356660       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:38:51.356694       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:38:51.356724       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:38:51.505912       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:38:51.505946       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:38:51.535446       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:38:51.535575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:38:51.535595       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:38:51.535616       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:38:51.635673       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:38:56 default-k8s-diff-port-154565 kubelet[770]: W1029 09:38:56.068918     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dfc2c419fe4814caa3f5a0bc7a2ac4b24be2798cd8aa60ffeba23c8cc25c3683/crio-10acd8436c7984e73ec091b4fb8d2c7ada9c89275fdf6d1472b521b17a94f5f9 WatchSource:0}: Error finding container 10acd8436c7984e73ec091b4fb8d2c7ada9c89275fdf6d1472b521b17a94f5f9: Status 404 returned error can't find the container with id 10acd8436c7984e73ec091b4fb8d2c7ada9c89275fdf6d1472b521b17a94f5f9
	Oct 29 09:38:58 default-k8s-diff-port-154565 kubelet[770]: I1029 09:38:58.588527     770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:39:01 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:01.598669     770 scope.go:117] "RemoveContainer" containerID="e91fb5813d5aa100fa5522a4e147779a083cac8ced044593db14f370d56dc385"
	Oct 29 09:39:02 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:02.603191     770 scope.go:117] "RemoveContainer" containerID="e91fb5813d5aa100fa5522a4e147779a083cac8ced044593db14f370d56dc385"
	Oct 29 09:39:02 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:02.609204     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:02 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:02.609428     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:03 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:03.606972     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:03 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:03.607181     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:05 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:05.949573     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:05 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:05.950186     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.279284     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.643476     770 scope.go:117] "RemoveContainer" containerID="3fd2ca35b7976e2d5cde1d20250d63cf8a3843e39fe76e9b2a90f0cea935c5ce"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.643676     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:16.643828     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:16 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:16.665979     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdsw" podStartSLOduration=12.226904052 podStartE2EDuration="21.665961076s" podCreationTimestamp="2025-10-29 09:38:55 +0000 UTC" firstStartedPulling="2025-10-29 09:38:56.095036277 +0000 UTC m=+13.287465013" lastFinishedPulling="2025-10-29 09:39:05.534093301 +0000 UTC m=+22.726522037" observedRunningTime="2025-10-29 09:39:05.628157567 +0000 UTC m=+22.820586327" watchObservedRunningTime="2025-10-29 09:39:16.665961076 +0000 UTC m=+33.858389812"
	Oct 29 09:39:23 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:23.663188     770 scope.go:117] "RemoveContainer" containerID="76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471"
	Oct 29 09:39:25 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:25.949137     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:25 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:25.949884     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:39.279570     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:39.706910     770 scope.go:117] "RemoveContainer" containerID="7e15281b78b57f2b7921b6796a2095f5d75f11b6efcfdfee0a6b021ac26008fa"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: I1029 09:39:39.707833     770 scope.go:117] "RemoveContainer" containerID="def5f21481b3d0e59948f1372921bdc8212525290f254d357bd5e810b72206c5"
	Oct 29 09:39:39 default-k8s-diff-port-154565 kubelet[770]: E1029 09:39:39.709780     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sgx54_kubernetes-dashboard(f838510f-cd85-4a23-be9a-9b0b86aee2e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sgx54" podUID="f838510f-cd85-4a23-be9a-9b0b86aee2e3"
	Oct 29 09:39:43 default-k8s-diff-port-154565 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:39:43 default-k8s-diff-port-154565 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:39:43 default-k8s-diff-port-154565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [47c8964204a91d0b46d5e4ff09a253ddec6adc122582f93a5497e300ab1bf5ea] <==
	2025/10/29 09:39:05 Using namespace: kubernetes-dashboard
	2025/10/29 09:39:05 Using in-cluster config to connect to apiserver
	2025/10/29 09:39:05 Using secret token for csrf signing
	2025/10/29 09:39:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:39:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:39:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:39:05 Generating JWE encryption key
	2025/10/29 09:39:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:39:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:39:06 Initializing JWE encryption key from synchronized object
	2025/10/29 09:39:06 Creating in-cluster Sidecar client
	2025/10/29 09:39:06 Serving insecurely on HTTP port: 9090
	2025/10/29 09:39:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:39:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:39:05 Starting overwatch
	
	
	==> storage-provisioner [0c86ad951f717e434b3bc0751b40d09aee480039cdbb2d71d3b5aba02ca39db8] <==
	I1029 09:39:23.713452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:39:23.726020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:39:23.726076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:39:23.729289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:27.184387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:31.444993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:35.042890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:38.097217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:41.119560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:41.124953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:39:41.125337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:39:41.125562       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-154565_c6f440f5-071e-4166-81ce-7160908dbf51!
	I1029 09:39:41.125723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb2a2ad0-3fcc-4033-a090-3abddb1b193f", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-154565_c6f440f5-071e-4166-81ce-7160908dbf51 became leader
	W1029 09:39:41.131208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:41.143560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:39:41.232025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-154565_c6f440f5-071e-4166-81ce-7160908dbf51!
	W1029 09:39:43.147204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:43.158608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:45.164663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:45.173017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:47.179173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:39:47.190332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [76deef5dfbe8964470407b18cf7e6c413662b0b3a9ea20f0b1ebd6bb5b990471] <==
	I1029 09:38:52.882702       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:39:22.887489       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565: exit status 2 (348.682481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.50s)
E1029 09:45:24.583584    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:45:32.274223    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
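For orientation, the failed Pause check above reduces to pausing the profile and then reading the APIServer field from minikube status. A minimal manual re-run sketch, assuming the default-k8s-diff-port-154565 profile still exists locally; the status invocation is copied verbatim from the report, while the exact pause invocation is an assumption and not part of the captured test output:

	# assumption: hand re-run of the paused-state check, not captured test output
	out/minikube-linux-arm64 pause -p default-k8s-diff-port-154565
	# the Pause test presumably expects something other than "Running" here;
	# the failing run above still reported "Running" and exited with status 2
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-154565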

                                                
                                    

Test pass (255/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.51
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.54
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 157.91
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 10.83
48 TestAddons/StoppedEnableDisable 12.43
49 TestCertOptions 39.52
50 TestCertExpiration 244.8
52 TestForceSystemdFlag 40.81
53 TestForceSystemdEnv 42.06
58 TestErrorSpam/setup 34.04
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.08
61 TestErrorSpam/pause 6.75
62 TestErrorSpam/unpause 5.42
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.56
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.91
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
83 TestFunctional/serial/ExtraConfig 32.65
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.71
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 10.71
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.3
93 TestFunctional/parallel/StatusCmd 1.32
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 25.61
101 TestFunctional/parallel/SSHCmd 0.67
102 TestFunctional/parallel/CpCmd 1.99
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 2.29
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
113 TestFunctional/parallel/License 0.43
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.96
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.04
121 TestFunctional/parallel/ImageCommands/Setup 0.63
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
129 TestFunctional/parallel/ProfileCmd/profile_list 0.51
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.38
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/MountCmd/any-port 8
148 TestFunctional/parallel/MountCmd/specific-port 2.11
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
150 TestFunctional/parallel/ServiceCmd/List 0.61
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 211.43
163 TestMultiControlPlane/serial/DeployApp 7.32
164 TestMultiControlPlane/serial/PingHostFromPods 1.51
165 TestMultiControlPlane/serial/AddWorkerNode 61.85
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.13
168 TestMultiControlPlane/serial/CopyFile 19.81
169 TestMultiControlPlane/serial/StopSecondaryNode 12.82
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 28.06
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.23
176 TestMultiControlPlane/serial/StopCluster 24.21
177 TestMultiControlPlane/serial/RestartCluster 82.76
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 85.59
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 81.02
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.89
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 41.18
211 TestKicCustomNetwork/use_default_bridge_network 36.38
212 TestKicExistingNetwork 38.03
213 TestKicCustomSubnet 37.84
214 TestKicStaticIP 37.5
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 71.52
219 TestMountStart/serial/StartWithMountFirst 9.49
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 10.06
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.81
224 TestMountStart/serial/VerifyMountPostDelete 0.3
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 7.61
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 138.73
231 TestMultiNode/serial/DeployApp2Nodes 4.8
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 58.27
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.51
237 TestMultiNode/serial/StopNode 2.46
238 TestMultiNode/serial/StartAfterStop 8.16
239 TestMultiNode/serial/RestartKeepsNodes 78.63
240 TestMultiNode/serial/DeleteNode 5.96
241 TestMultiNode/serial/StopMultiNode 23.98
242 TestMultiNode/serial/RestartMultiNode 52.55
243 TestMultiNode/serial/ValidateNameConflict 36.98
250 TestScheduledStopUnix 109.72
253 TestInsufficientStorage 13.8
254 TestRunningBinaryUpgrade 56.27
256 TestKubernetesUpgrade 356.16
257 TestMissingContainerUpgrade 144.87
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 43.77
261 TestNoKubernetes/serial/StartWithStopK8s 9.04
262 TestNoKubernetes/serial/Start 10.49
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
264 TestNoKubernetes/serial/ProfileList 4.24
265 TestNoKubernetes/serial/Stop 1.4
266 TestNoKubernetes/serial/StartNoArgs 7.12
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.83
269 TestStoppedBinaryUpgrade/Upgrade 53.23
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
279 TestPause/serial/Start 84.99
280 TestPause/serial/SecondStartNoReconfiguration 28.89
289 TestNetworkPlugins/group/false 5.72
294 TestStartStop/group/old-k8s-version/serial/FirstStart 59.6
295 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
297 TestStartStop/group/old-k8s-version/serial/Stop 12.02
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 49.92
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
305 TestStartStop/group/no-preload/serial/FirstStart 77.52
307 TestStartStop/group/embed-certs/serial/FirstStart 86.22
308 TestStartStop/group/no-preload/serial/DeployApp 8.33
310 TestStartStop/group/no-preload/serial/Stop 12.05
311 TestStartStop/group/embed-certs/serial/DeployApp 9.35
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/no-preload/serial/SecondStart 53.42
315 TestStartStop/group/embed-certs/serial/Stop 12.49
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
317 TestStartStop/group/embed-certs/serial/SecondStart 51.2
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.67
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
329 TestStartStop/group/newest-cni/serial/FirstStart 39.91
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.34
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
334 TestStartStop/group/newest-cni/serial/SecondStart 15.27
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
340 TestNetworkPlugins/group/auto/Start 88.82
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.9
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
347 TestNetworkPlugins/group/auto/KubeletFlags 0.38
348 TestNetworkPlugins/group/auto/NetCatPod 11.29
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
351 TestNetworkPlugins/group/auto/DNS 0.19
352 TestNetworkPlugins/group/auto/Localhost 0.2
353 TestNetworkPlugins/group/auto/HairPin 0.17
354 TestNetworkPlugins/group/kindnet/Start 84.12
355 TestNetworkPlugins/group/calico/Start 69.33
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.31
361 TestNetworkPlugins/group/calico/NetCatPod 10.29
362 TestNetworkPlugins/group/kindnet/DNS 0.23
363 TestNetworkPlugins/group/kindnet/Localhost 0.2
364 TestNetworkPlugins/group/kindnet/HairPin 0.31
365 TestNetworkPlugins/group/calico/DNS 0.23
366 TestNetworkPlugins/group/calico/Localhost 0.2
367 TestNetworkPlugins/group/calico/HairPin 0.17
368 TestNetworkPlugins/group/custom-flannel/Start 66.56
369 TestNetworkPlugins/group/enable-default-cni/Start 90.23
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
372 TestNetworkPlugins/group/custom-flannel/DNS 0.16
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/flannel/Start 65.23
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
381 TestNetworkPlugins/group/bridge/Start 76.68
382 TestNetworkPlugins/group/flannel/ControllerPod 6
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
384 TestNetworkPlugins/group/flannel/NetCatPod 10.4
385 TestNetworkPlugins/group/flannel/DNS 0.53
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.15
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 10.27
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (9.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-675275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-675275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.510814815s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.51s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1029 08:20:26.531943    4550 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1029 08:20:26.532024    4550 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-675275
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-675275: exit status 85 (92.448985ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-675275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-675275 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:17.065639    4555 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:17.065740    4555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:17.065746    4555 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:17.065751    4555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:17.066098    4555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	W1029 08:20:17.066257    4555 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21800-2763/.minikube/config/config.json: open /home/jenkins/minikube-integration/21800-2763/.minikube/config/config.json: no such file or directory
	I1029 08:20:17.067606    4555 out.go:368] Setting JSON to true
	I1029 08:20:17.068409    4555 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":169,"bootTime":1761725848,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:20:17.068502    4555 start.go:143] virtualization:  
	I1029 08:20:17.072426    4555 out.go:99] [download-only-675275] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1029 08:20:17.072616    4555 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball: no such file or directory
	I1029 08:20:17.072671    4555 notify.go:221] Checking for updates...
	I1029 08:20:17.075458    4555 out.go:171] MINIKUBE_LOCATION=21800
	I1029 08:20:17.078246    4555 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:17.081109    4555 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:20:17.084056    4555 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:20:17.086965    4555 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1029 08:20:17.092646    4555 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1029 08:20:17.092909    4555 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:17.118915    4555 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:20:17.119014    4555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:17.519830    4555 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-29 08:20:17.510279766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:17.519933    4555 docker.go:319] overlay module found
	I1029 08:20:17.522912    4555 out.go:99] Using the docker driver based on user configuration
	I1029 08:20:17.522958    4555 start.go:309] selected driver: docker
	I1029 08:20:17.522971    4555 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:17.523079    4555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:17.577857    4555 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-29 08:20:17.568799587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:17.578017    4555 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:17.578324    4555 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1029 08:20:17.578495    4555 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 08:20:17.581541    4555 out.go:171] Using Docker driver with root privileges
	I1029 08:20:17.584248    4555 cni.go:84] Creating CNI manager for ""
	I1029 08:20:17.584332    4555 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:17.584344    4555 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:17.584422    4555 start.go:353] cluster config:
	{Name:download-only-675275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-675275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:17.587343    4555 out.go:99] Starting "download-only-675275" primary control-plane node in "download-only-675275" cluster
	I1029 08:20:17.587360    4555 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:20:17.590274    4555 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:20:17.590314    4555 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 08:20:17.590408    4555 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:20:17.606473    4555 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:17.606636    4555 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1029 08:20:17.606743    4555 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:17.652201    4555 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1029 08:20:17.652225    4555 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:17.652376    4555 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 08:20:17.655704    4555 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1029 08:20:17.655727    4555 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1029 08:20:17.740141    4555 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1029 08:20:17.740271    4555 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-675275 host does not exist
	  To start a cluster, run: "minikube start -p download-only-675275"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
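For reference, the LogsDuration output above shows the preload flow: the expected digest is fetched from the GCS API and then appended to the tarball URL as "?checksum=md5:...", so the payload can be verified as it streams to disk. A minimal Go sketch of that download-and-verify pattern follows; the URL and checksum are copied from the log above, while downloadWithMD5 itself is only an illustrative helper, not minikube's implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest and verifies the payload against the
// expected md5 hex digest, mirroring the checksum=md5:... pattern in the log.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the v1.28.0 preload lines above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	if err := downloadWithMD5(url, "preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}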

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-675275
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-968722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-968722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.539098622s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.54s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1029 08:20:32.532395    4550 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1029 08:20:32.532432    4550 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-968722
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-968722: exit status 85 (87.283069ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-675275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-675275 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-675275                                                                                                                                                   │ download-only-675275 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ -o=json --download-only -p download-only-968722 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-968722 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:27.039337    4752 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:27.039471    4752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:27.039481    4752 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:27.039487    4752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:27.039766    4752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:20:27.040178    4752 out.go:368] Setting JSON to true
	I1029 08:20:27.040957    4752 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":179,"bootTime":1761725848,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:20:27.041025    4752 start.go:143] virtualization:  
	I1029 08:20:27.044332    4752 out.go:99] [download-only-968722] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:20:27.044546    4752 notify.go:221] Checking for updates...
	I1029 08:20:27.047307    4752 out.go:171] MINIKUBE_LOCATION=21800
	I1029 08:20:27.050239    4752 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:27.053016    4752 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:20:27.055995    4752 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:20:27.058922    4752 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1029 08:20:27.064818    4752 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1029 08:20:27.065068    4752 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:27.090975    4752 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:20:27.091081    4752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:27.159019    4752 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-29 08:20:27.14973186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:27.159128    4752 docker.go:319] overlay module found
	I1029 08:20:27.162443    4752 out.go:99] Using the docker driver based on user configuration
	I1029 08:20:27.162564    4752 start.go:309] selected driver: docker
	I1029 08:20:27.162609    4752 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:27.162952    4752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:27.218050    4752 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-29 08:20:27.209086734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:20:27.218206    4752 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:27.218482    4752 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1029 08:20:27.218642    4752 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 08:20:27.221639    4752 out.go:171] Using Docker driver with root privileges
	I1029 08:20:27.224449    4752 cni.go:84] Creating CNI manager for ""
	I1029 08:20:27.224519    4752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:27.224533    4752 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:27.224615    4752 start.go:353] cluster config:
	{Name:download-only-968722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-968722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:27.227504    4752 out.go:99] Starting "download-only-968722" primary control-plane node in "download-only-968722" cluster
	I1029 08:20:27.227528    4752 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:20:27.230477    4752 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:20:27.230515    4752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:27.230615    4752 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:20:27.247224    4752 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:27.247348    4752 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1029 08:20:27.247366    4752 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1029 08:20:27.247370    4752 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1029 08:20:27.247378    4752 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1029 08:20:27.299250    4752 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:20:27.299280    4752 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:27.299437    4752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:27.302660    4752 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1029 08:20:27.302695    4752 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1029 08:20:27.394233    4752 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1029 08:20:27.394284    4752 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1029 08:20:31.935417    4752 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:20:31.935865    4752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/download-only-968722/config.json ...
	I1029 08:20:31.935922    4752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/download-only-968722/config.json: {Name:mk522bf4f0fec433cdcb10cbbb69b6db211dd681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:31.936141    4752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:31.936359    4752 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21800-2763/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-968722 host does not exist
	  To start a cluster, run: "minikube start -p download-only-968722"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-968722
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1029 08:20:33.676753    4550 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-301132 --alsologtostderr --binary-mirror http://127.0.0.1:43123 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-301132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-301132
--- PASS: TestBinaryMirror (0.60s)
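TestBinaryMirror points minikube at --binary-mirror http://127.0.0.1:43123 and, as the binary.go line above shows, the kubectl URL carries "checksum=file:...kubectl.sha256", i.e. the expected digest is read from a published .sha256 file next to the binary. A throwaway local mirror in that spirit is sketched below; the port is the one from the test, but the directory layout (mirroring the upstream release paths, with the .sha256 files alongside) is an assumption, not something the log confirms.

package main

import (
	"log"
	"net/http"
)

// A minimal stand-in for the server behind --binary-mirror. It serves ./mirror
// on 127.0.0.1:43123; the assumption is that ./mirror copies the upstream
// release layout, e.g. ./mirror/release/v1.34.1/bin/linux/arm64/kubectl and
// the matching kubectl.sha256 file.
func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving ./mirror on http://127.0.0.1:43123")
	log.Fatal(http.ListenAndServe("127.0.0.1:43123", nil))
}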

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-757691
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-757691: exit status 85 (75.359272ms)

                                                
                                                
-- stdout --
	* Profile "addons-757691" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-757691"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-757691
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-757691: exit status 85 (62.859876ms)

                                                
                                                
-- stdout --
	* Profile "addons-757691" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-757691"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (157.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-757691 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-757691 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m37.910741082s)
--- PASS: TestAddons/Setup (157.91s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.25s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-757691 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-757691 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-757691 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-757691 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3136dfda-447e-4351-bffc-ab9f47a42a8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3136dfda-447e-4351-bffc-ab9f47a42a8b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003017254s
addons_test.go:694: (dbg) Run:  kubectl --context addons-757691 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-757691 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-757691 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-757691 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.83s)
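The fake-credentials check above asserts that the gcp-auth webhook injected an environment variable and a mounted credentials file into the busybox pod, using printenv and cat via kubectl exec. The sketch below does the same check from inside a pod in Go; the variable names and the /google-app-creds.json path are the ones visible in the log, everything else is illustrative.

package main

import (
	"fmt"
	"log"
	"os"
)

// Run inside the test pod, this mirrors the printenv/cat exec calls above:
// confirm the gcp-auth webhook injected the fake credentials.
func main() {
	for _, key := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		val, ok := os.LookupEnv(key)
		if !ok {
			log.Fatalf("%s not injected", key)
		}
		fmt.Printf("%s=%s\n", key, val)
	}
	creds, err := os.ReadFile("/google-app-creds.json")
	if err != nil {
		log.Fatalf("credentials file not mounted: %v", err)
	}
	fmt.Printf("credentials file: %d bytes\n", len(creds))
}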

                                                
                                    
TestAddons/StoppedEnableDisable (12.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-757691
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-757691: (12.157692291s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-757691
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-757691
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-757691
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

                                                
                                    
TestCertOptions (39.52s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-699236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.636015325s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-699236 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-699236 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-699236 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-699236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-699236
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-699236: (2.113081302s)
--- PASS: TestCertOptions (39.52s)
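TestCertOptions inspects the generated apiserver.crt with "openssl x509 -text -noout" to confirm the extra SANs and port, and TestCertExpiration below relies on the certificate's validity window. A small Go sketch that surfaces the same fields is shown here; it assumes apiserver.crt has been copied off the node to a local path (for example via minikube ssh and cat), which is a placeholder rather than anything the test does.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// Prints the SANs and expiry of a certificate, roughly what the openssl
// invocation above is eyeballing. The local file path is a placeholder.
func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("NotAfter:", cert.NotAfter)
}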

                                                
                                    
TestCertExpiration (244.8s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1029 09:30:24.584409    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-690444 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.785883546s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-690444 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.328893295s)
helpers_test.go:175: Cleaning up "cert-expiration-690444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-690444
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-690444: (2.683008931s)
--- PASS: TestCertExpiration (244.80s)

                                                
                                    
TestForceSystemdFlag (40.81s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-894737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-894737 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.914512195s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-894737 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-894737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-894737
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-894737: (2.608910093s)
--- PASS: TestForceSystemdFlag (40.81s)
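TestForceSystemdFlag confirms that --force-systemd is reflected in the CRI-O drop-in by cat-ing /etc/crio/crio.conf.d/02-crio.conf. The sketch below performs a similar check against a local copy of that file; the exact cgroup_manager = "systemd" key it searches for is an assumption about what the test asserts, since the log only shows the file being read.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// Checks a local copy of the CRI-O drop-in shown above for a systemd cgroup
// manager setting. The key being matched is an assumption.
func main() {
	data, err := os.ReadFile("02-crio.conf")
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is configured for the systemd cgroup manager")
	} else {
		fmt.Println("no systemd cgroup_manager setting found")
	}
}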

                                                
                                    
TestForceSystemdEnv (42.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-116185 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-116185 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.172245909s)
helpers_test.go:175: Cleaning up "force-systemd-env-116185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-116185
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-116185: (2.891111016s)
--- PASS: TestForceSystemdEnv (42.06s)

                                                
                                    
TestErrorSpam/setup (34.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-490510 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-490510 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-490510 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-490510 --driver=docker  --container-runtime=crio: (34.041529374s)
--- PASS: TestErrorSpam/setup (34.04s)

                                                
                                    
TestErrorSpam/start (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
TestErrorSpam/status (1.08s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
TestErrorSpam/pause (6.75s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause: exit status 80 (2.367863195s)

                                                
                                                
-- stdout --
	* Pausing node nospam-490510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:27:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause: exit status 80 (2.193843282s)

                                                
                                                
-- stdout --
	* Pausing node nospam-490510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:27:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause: exit status 80 (2.191168846s)

                                                
                                                
-- stdout --
	* Pausing node nospam-490510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:27:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.75s)
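All three pause attempts above fail the same way: the pause path shells out to "sudo runc list -f json", and on this CRI-O node /run/runc does not exist, so listing fails before anything can be paused. The Go sketch below reproduces that probe; treating a missing /run/runc as "runc has not created any containers on this host" is this sketch's interpretation of the error, not minikube's actual handling.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Runs the same probe as the failing pause path and distinguishes a missing
// runc state directory from other failures.
func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			fmt.Println("runc has no state directory; no containers have been created by runc on this host")
			return
		}
		fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println(string(out))
}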

                                                
                                    
TestErrorSpam/unpause (5.42s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause: exit status 80 (1.856694987s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-490510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:27:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause: exit status 80 (1.657016116s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-490510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:27:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause: exit status 80 (1.902452503s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-490510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:27:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.42s)
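
For anyone triaging the GUEST_PAUSE/GUEST_UNPAUSE exits above, the underlying failure is the same in every attempt: `sudo runc list -f json` cannot open /run/runc on the node. A minimal diagnostic sketch follows; the profile name and the runc invocation are taken from the log above, while the ls and crictl checks are suggested additions, not part of the test:

	# Check whether runc's state directory exists on the node, then repeat
	# the exact call the pause/unpause path makes (profile name from the log)
	out/minikube-linux-arm64 -p nospam-490510 ssh "sudo ls -ld /run/runc"
	out/minikube-linux-arm64 -p nospam-490510 ssh "sudo runc list -f json"
	# CRI-O's own view of containers, which does not go through /run/runc
	out/minikube-linux-arm64 -p nospam-490510 ssh "sudo crictl ps"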

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 stop: (1.334193793s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490510 --log_dir /tmp/nospam-490510 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21800-2763/.minikube/files/etc/test/nested/copy/4550/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.56s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-546837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1029 08:28:13.400532    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:13.407040    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:13.418468    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:13.439906    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:13.481267    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:13.562689    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:13.724214    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:14.045747    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:14.687746    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:15.969139    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:18.534336    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:23.656498    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:33.898733    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-546837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.562274945s)
--- PASS: TestFunctional/serial/StartWithProxy (79.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.91s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1029 08:28:47.168567    4550 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-546837 --alsologtostderr -v=8
E1029 08:28:54.380373    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-546837 --alsologtostderr -v=8: (41.908883537s)
functional_test.go:678: soft start took 41.912363107s for "functional-546837" cluster.
I1029 08:29:29.077791    4550 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (41.91s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-546837 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 cache add registry.k8s.io/pause:3.1: (1.170274091s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 cache add registry.k8s.io/pause:3.3: (1.195498582s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 cache add registry.k8s.io/pause:latest: (1.113830486s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-546837 /tmp/TestFunctionalserialCacheCmdcacheadd_local3745531934/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cache add minikube-local-cache-test:functional-546837
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cache delete minikube-local-cache-test:functional-546837
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-546837
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.01732ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cache reload
E1029 08:29:35.342761    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
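
Condensed, the cache_reload sequence above is the following round trip; every command is taken from the run, only the comments are added:

	# Remove the cached image from the node's runtime and confirm it is gone
	out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image removed
	# Reload everything in minikube's local cache back into the node, then re-check
	out/minikube-linux-arm64 -p functional-546837 cache reload
	out/minikube-linux-arm64 -p functional-546837 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds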

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 kubectl -- --context functional-546837 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-546837 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.65s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-546837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-546837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.650411972s)
functional_test.go:776: restart took 32.650500104s for "functional-546837" cluster.
I1029 08:30:09.115805    4550 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.65s)
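
The --extra-config value above ends up as an extra kube-apiserver flag after the restart; one way to confirm it was applied is to inspect the regenerated static pod. The kubectl context comes from the log, while the label selector and jsonpath expression are illustrative additions, not part of the test:

	# Look for --enable-admission-plugins in the kube-apiserver command line
	kubectl --context functional-546837 -n kube-system get pod \
	  -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}' \
	  | tr ',' '\n' | grep enable-admission-plugins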

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-546837 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 logs: (1.468179204s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 logs --file /tmp/TestFunctionalserialLogsFileCmd3787791535/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 logs --file /tmp/TestFunctionalserialLogsFileCmd3787791535/001/logs.txt: (1.509849304s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.71s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-546837 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-546837
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-546837: exit status 115 (379.553307ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32050 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-546837 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-546837 delete -f testdata/invalidsvc.yaml: (1.079118578s)
--- PASS: TestFunctional/serial/InvalidService (4.71s)
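
Condensed, the InvalidService check above applies a Service with no backing pod, expects `minikube service` to exit with SVC_UNREACHABLE, and cleans up; the commands and the exit status are taken from the run, only the comments are added:

	kubectl --context functional-546837 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-546837    # exits 115 (SVC_UNREACHABLE): no running pod for the service
	kubectl --context functional-546837 delete -f testdata/invalidsvc.yaml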

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 config get cpus: exit status 14 (89.159693ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 config get cpus: exit status 14 (69.056863ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-546837 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-546837 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 32465: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.71s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-546837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-546837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.113785ms)

                                                
                                                
-- stdout --
	* [functional-546837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:40:37.960595   30079 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:40:37.960709   30079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:40:37.960719   30079 out.go:374] Setting ErrFile to fd 2...
	I1029 08:40:37.960724   30079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:40:37.960977   30079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:40:37.961346   30079 out.go:368] Setting JSON to false
	I1029 08:40:37.962216   30079 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1390,"bootTime":1761725848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:40:37.962293   30079 start.go:143] virtualization:  
	I1029 08:40:37.965764   30079 out.go:179] * [functional-546837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 08:40:37.968883   30079 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:40:37.968951   30079 notify.go:221] Checking for updates...
	I1029 08:40:37.975483   30079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:40:37.978475   30079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:40:37.981836   30079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:40:37.984806   30079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:40:37.987708   30079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:40:37.991011   30079 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:40:37.991584   30079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:40:38.037690   30079 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:40:38.037797   30079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:40:38.104255   30079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 08:40:38.094712824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:40:38.104400   30079 docker.go:319] overlay module found
	I1029 08:40:38.107448   30079 out.go:179] * Using the docker driver based on existing profile
	I1029 08:40:38.110320   30079 start.go:309] selected driver: docker
	I1029 08:40:38.110354   30079 start.go:930] validating driver "docker" against &{Name:functional-546837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:40:38.110467   30079 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:40:38.114044   30079 out.go:203] 
	W1029 08:40:38.117021   30079 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1029 08:40:38.119991   30079 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-546837 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
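
The DryRun exit above is the intended validation path: minikube rejects the 250MB request before touching the existing profile. A minimal reproduction sketch using the same flags as the log (the trailing echo is added for illustration):

	out/minikube-linux-arm64 start -p functional-546837 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio
	echo "exit status: $?"    # the run above reports exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY)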

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-546837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-546837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (301.848673ms)

                                                
                                                
-- stdout --
	* [functional-546837] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:40:51.919232   31991 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:40:51.919333   31991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:40:51.919339   31991 out.go:374] Setting ErrFile to fd 2...
	I1029 08:40:51.919343   31991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:40:51.920285   31991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:40:51.920825   31991 out.go:368] Setting JSON to false
	I1029 08:40:51.922264   31991 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1404,"bootTime":1761725848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 08:40:51.922325   31991 start.go:143] virtualization:  
	I1029 08:40:51.925747   31991 out.go:179] * [functional-546837] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1029 08:40:51.929616   31991 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:40:51.929794   31991 notify.go:221] Checking for updates...
	I1029 08:40:51.936012   31991 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:40:51.939084   31991 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 08:40:51.942621   31991 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 08:40:51.945521   31991 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 08:40:51.948447   31991 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:40:51.951832   31991 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:40:51.952553   31991 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:40:51.992639   31991 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 08:40:51.992826   31991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:40:52.098654   31991 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-29 08:40:52.087059051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:40:52.098765   31991 docker.go:319] overlay module found
	I1029 08:40:52.101815   31991 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1029 08:40:52.104634   31991 start.go:309] selected driver: docker
	I1029 08:40:52.104653   31991 start.go:930] validating driver "docker" against &{Name:functional-546837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-546837 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:40:52.104745   31991 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:40:52.108247   31991 out.go:203] 
	W1029 08:40:52.111103   31991 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1029 08:40:52.113973   31991 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c6c461e3-ebd6-471d-b186-b61ae9d0600d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004377101s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-546837 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-546837 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-546837 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-546837 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ea1977d2-d9af-42a4-bc3d-dae92d08b159] Pending
helpers_test.go:352: "sp-pod" [ea1977d2-d9af-42a4-bc3d-dae92d08b159] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ea1977d2-d9af-42a4-bc3d-dae92d08b159] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003711271s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-546837 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-546837 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-546837 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d8242913-43c3-42f1-940b-ebbf188fdad8] Pending
helpers_test.go:352: "sp-pod" [d8242913-43c3-42f1-940b-ebbf188fdad8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003164261s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-546837 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.61s)
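
Condensed, the PersistentVolumeClaim test above verifies that data written into the claim survives pod deletion; the kubectl commands are taken from the run, only the comments are added:

	# Write a marker file into the mounted claim, delete the pod, recreate it,
	# and confirm the file is still there on the re-mounted volume
	kubectl --context functional-546837 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-546837 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-546837 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-546837 exec sp-pod -- ls /tmp/mount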

                                                
                                    
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh -n functional-546837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cp functional-546837:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2199916412/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh -n functional-546837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh -n functional-546837 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4550/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /etc/test/nested/copy/4550/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4550.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /etc/ssl/certs/4550.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4550.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /usr/share/ca-certificates/4550.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/45502.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /etc/ssl/certs/45502.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/45502.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /usr/share/ca-certificates/45502.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.29s)
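
Note: the CertSync run above reads the same synced certificate back from three guest locations — /etc/ssl/certs/4550.pem, /usr/share/ca-certificates/4550.pem, and the hash-named copy /etc/ssl/certs/51391683.0 (OpenSSL-style <hash>.0 naming) — to confirm all copies match. A minimal sketch of that comparison, assuming the out/minikube-linux-arm64 binary and the functional-546837 profile from this run; it is not the test's own code, just the same idea driven from the host:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// catInVM reads a file from inside the functional-546837 node over minikube ssh.
func catInVM(path string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-546837",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		panic(err)
	}
	return bytes.TrimSpace(out)
}

func main() {
	ref := catInVM("/etc/ssl/certs/4550.pem")
	for _, p := range []string{
		"/usr/share/ca-certificates/4550.pem",
		"/etc/ssl/certs/51391683.0", // hash-named copy of the same cert
	} {
		fmt.Println(p, "matches:", bytes.Equal(catInVM(p), ref))
	}
}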

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-546837 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
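
Note: the node-labels check above passes kubectl a go-template that ranges over the first node's label map and prints only the keys. The same template body can be exercised directly with Go's text/template; a small sketch with a made-up label map standing in for (index .items 0).metadata.labels:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical labels; the real test reads them from the cluster node.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-546837",
		"kubernetes.io/os":       "linux",
	}
	// Same template body the test hands to kubectl --output=go-template.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}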

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh "sudo systemctl is-active docker": exit status 1 (398.26527ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh "sudo systemctl is-active containerd": exit status 1 (358.199173ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
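
Note: with crio as the configured runtime, docker and containerd are expected to be inactive; `systemctl is-active` exits with status 3 for an inactive unit, so the non-zero exits with "inactive" on stdout above are the passing outcome. A sketch of that check, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive reports whether the given systemd unit is inactive inside the node.
// The expected (passing) case is a non-zero exit with "inactive" printed.
func runtimeInactive(unit string) bool {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-546837",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	fmt.Println("docker inactive:", runtimeInactive("docker"))
	fmt.Println("containerd inactive:", runtimeInactive("containerd"))
}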

                                                
                                    
x
+
TestFunctional/parallel/License (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-546837 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ 46fabdd7f288c │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-546837 image ls --format table --alsologtostderr:
I1029 08:41:03.360700   33530 out.go:360] Setting OutFile to fd 1 ...
I1029 08:41:03.360903   33530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:03.360931   33530 out.go:374] Setting ErrFile to fd 2...
I1029 08:41:03.360951   33530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:03.361278   33530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
I1029 08:41:03.361972   33530 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:03.362162   33530 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:03.362726   33530 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
I1029 08:41:03.381278   33530 ssh_runner.go:195] Run: systemctl --version
I1029 08:41:03.381339   33530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
I1029 08:41:03.398726   33530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
I1029 08:41:03.502824   33530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
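
Note: as the stderr above shows, `image ls` is backed by `sudo crictl images --output json` inside the node; the table, json, and yaml views in the following tests are all rendered from that one listing. A rough sketch of decoding it on the host; the struct below is an assumed shape covering only the fields shown in the table (id, repoTags, repoDigests, size):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `crictl images --output json`; only the fields the report's
// table needs are declared here.
type crictlImages struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	raw, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-546837",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%-60v %-15.13s %s bytes\n", img.RepoTags, img.ID, img.Size)
	}
}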

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-546837 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424","docker.io/library/nginx@sha256:b619c34a163ac12f68c1982568a122c4953dbf3126b8dbf0cc2f6fdbfd85de27"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006680"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags"
:["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v2025
0512-df8de77b"],"size":"111333938"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:9dacca6749f2215cc3094f641c5b6662f7791e66a57ed034e8
06a7c48d51c18f"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"s
ize":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d
0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-546837 image ls --format json --alsologtostderr:
I1029 08:41:03.117342   33494 out.go:360] Setting OutFile to fd 1 ...
I1029 08:41:03.117510   33494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:03.117540   33494 out.go:374] Setting ErrFile to fd 2...
I1029 08:41:03.117562   33494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:03.117858   33494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
I1029 08:41:03.118569   33494 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:03.118733   33494 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:03.119206   33494 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
I1029 08:41:03.139090   33494 ssh_runner.go:195] Run: systemctl --version
I1029 08:41:03.139161   33494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
I1029 08:41:03.156951   33494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
I1029 08:41:03.258937   33494 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-546837 image ls --format yaml --alsologtostderr:
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424
- docker.io/library/nginx@sha256:b619c34a163ac12f68c1982568a122c4953dbf3126b8dbf0cc2f6fdbfd85de27
repoTags:
- docker.io/library/nginx:latest
size: "176006680"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:9dacca6749f2215cc3094f641c5b6662f7791e66a57ed034e806a7c48d51c18f
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-546837 image ls --format yaml --alsologtostderr:
I1029 08:41:02.880286   33457 out.go:360] Setting OutFile to fd 1 ...
I1029 08:41:02.880532   33457 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:02.880544   33457 out.go:374] Setting ErrFile to fd 2...
I1029 08:41:02.880551   33457 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:02.880871   33457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
I1029 08:41:02.881512   33457 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:02.881661   33457 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:02.882223   33457 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
I1029 08:41:02.899324   33457 ssh_runner.go:195] Run: systemctl --version
I1029 08:41:02.899384   33457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
I1029 08:41:02.917117   33457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
I1029 08:41:03.022098   33457 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh pgrep buildkitd: exit status 1 (366.110418ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image build -t localhost/my-image:functional-546837 testdata/build --alsologtostderr
2025/10/29 08:41:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-546837 image build -t localhost/my-image:functional-546837 testdata/build --alsologtostderr: (3.419990487s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-546837 image build -t localhost/my-image:functional-546837 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fbfdf989f1f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-546837
--> 5fc3fa158b8
Successfully tagged localhost/my-image:functional-546837
5fc3fa158b84f7da552d7c7e63134e1f7e9b54e18cba154bb25c73122e253304
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-546837 image build -t localhost/my-image:functional-546837 testdata/build --alsologtostderr:
I1029 08:41:01.899919   33388 out.go:360] Setting OutFile to fd 1 ...
I1029 08:41:01.900161   33388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:01.900188   33388 out.go:374] Setting ErrFile to fd 2...
I1029 08:41:01.900208   33388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:41:01.903673   33388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
I1029 08:41:01.904437   33388 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:01.905077   33388 config.go:182] Loaded profile config "functional-546837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:41:01.905586   33388 cli_runner.go:164] Run: docker container inspect functional-546837 --format={{.State.Status}}
I1029 08:41:01.932832   33388 ssh_runner.go:195] Run: systemctl --version
I1029 08:41:01.932887   33388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-546837
I1029 08:41:01.960503   33388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/functional-546837/id_rsa Username:docker}
I1029 08:41:02.074536   33388 build_images.go:162] Building image from path: /tmp/build.1119506044.tar
I1029 08:41:02.074692   33388 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1029 08:41:02.085898   33388 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1119506044.tar
I1029 08:41:02.090299   33388 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1119506044.tar: stat -c "%s %y" /var/lib/minikube/build/build.1119506044.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1119506044.tar': No such file or directory
I1029 08:41:02.090379   33388 ssh_runner.go:362] scp /tmp/build.1119506044.tar --> /var/lib/minikube/build/build.1119506044.tar (3072 bytes)
I1029 08:41:02.110216   33388 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1119506044
I1029 08:41:02.119174   33388 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1119506044 -xf /var/lib/minikube/build/build.1119506044.tar
I1029 08:41:02.127973   33388 crio.go:315] Building image: /var/lib/minikube/build/build.1119506044
I1029 08:41:02.128136   33388 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-546837 /var/lib/minikube/build/build.1119506044 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1029 08:41:05.235100   33388 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-546837 /var/lib/minikube/build/build.1119506044 --cgroup-manager=cgroupfs: (3.106914874s)
I1029 08:41:05.235172   33388 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1119506044
I1029 08:41:05.244073   33388 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1119506044.tar
I1029 08:41:05.251708   33388 build_images.go:218] Built localhost/my-image:functional-546837 from /tmp/build.1119506044.tar
I1029 08:41:05.251741   33388 build_images.go:134] succeeded building to: functional-546837
I1029 08:41:05.251747   33388 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)
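
Note: the stderr above spells out the build path for crio: tar the local build context, copy it into the node under /var/lib/minikube/build, unpack it, and run `sudo podman build` there (crio has no daemon-side build, so podman does the work). A condensed sketch of the same sequence driven from the host; the temp file and context directory names are hypothetical, and error handling is trimmed:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "functional-546837"
	// 1. Tar the build context locally (hypothetical archive name).
	run("tar", "-C", "testdata/build", "-cf", "/tmp/build.ctx.tar", ".")
	// 2. Copy the archive into the node and unpack it (hypothetical paths).
	run(mk, "-p", profile, "cp", "/tmp/build.ctx.tar", "/home/docker/build.ctx.tar")
	run(mk, "-p", profile, "ssh",
		"sudo mkdir -p /var/lib/minikube/build/ctx && sudo tar -C /var/lib/minikube/build/ctx -xf /home/docker/build.ctx.tar")
	// 3. Build with podman inside the node, as the log above does.
	run(mk, "-p", profile, "ssh",
		"sudo podman build -t localhost/my-image:functional-546837 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs")
}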

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-546837
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "453.964634ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.198426ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "488.520396ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "66.823929ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image rm kicbase/echo-server:functional-546837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-546837 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-546837 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-546837 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-546837 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 28564: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-546837 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-546837 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [339d40c5-ba50-4557-b660-4b59925627e3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [339d40c5-ba50-4557-b660-4b59925627e3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003628657s
I1029 08:30:33.965592    4550 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-546837 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.164.90 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
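
Note: the tunnel sequence above is: start `minikube tunnel`, apply a LoadBalancer service, wait for its pod, read the ingress IP the tunnel assigned from .status.loadBalancer.ingress[0].ip, then fetch it over plain HTTP. A sketch of the last two steps, reusing the kubectl context and service name from this run; it assumes the tunnel is still running:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Read the ingress IP assigned to the LoadBalancer service.
	out, err := exec.Command("kubectl", "--context", "functional-546837",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))

	// With the tunnel running, the service IP is reachable from the host.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes from", ip)
}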

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-546837 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdany-port2734391505/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761727238368033836" to /tmp/TestFunctionalparallelMountCmdany-port2734391505/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761727238368033836" to /tmp/TestFunctionalparallelMountCmdany-port2734391505/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761727238368033836" to /tmp/TestFunctionalparallelMountCmdany-port2734391505/001/test-1761727238368033836
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.953375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1029 08:40:38.757215    4550 retry.go:31] will retry after 564.706337ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 29 08:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 29 08:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 29 08:40 test-1761727238368033836
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh cat /mount-9p/test-1761727238368033836
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-546837 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5b810e9b-8562-42f6-930d-48ebfc019370] Pending
helpers_test.go:352: "busybox-mount" [5b810e9b-8562-42f6-930d-48ebfc019370] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5b810e9b-8562-42f6-930d-48ebfc019370] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5b810e9b-8562-42f6-930d-48ebfc019370] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005767516s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-546837 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdany-port2734391505/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.00s)
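
Note: the mount checks above poll `findmnt -T /mount-9p | grep 9p` inside the guest because the 9p mount can take a moment to appear after the `minikube mount` daemon starts (hence the retry.go lines). A sketch of that poll with the profile and mount point from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor9pMount retries until the guest reports a 9p filesystem at path.
func waitFor9pMount(profile, path string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if cmd.Run() == nil {
			return nil // mount is visible inside the guest
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never showed a 9p mount after %d attempts", path, attempts)
}

func main() {
	if err := waitFor9pMount("functional-546837", "/mount-9p", 10); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is up")
}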

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdspecific-port470108229/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.209206ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1029 08:40:46.743326    4550 retry.go:31] will retry after 669.612516ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdspecific-port470108229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh "sudo umount -f /mount-9p": exit status 1 (282.142412ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-546837 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdspecific-port470108229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2162841503/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2162841503/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2162841503/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T" /mount1: exit status 1 (585.006262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1029 08:40:49.058303    4550 retry.go:31] will retry after 546.837859ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-546837 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2162841503/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2162841503/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-546837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2162841503/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-546837 service list -o json
functional_test.go:1504: Took "630.731886ms" to run "out/minikube-linux-arm64 -p functional-546837 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-546837
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-546837
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-546837
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (211.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1029 08:43:13.388699    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:44:36.468845    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m30.527322545s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (211.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 kubectl -- rollout status deployment/busybox: (4.471672062s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-fj895 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-gmd49 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-hl8ll -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-fj895 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-gmd49 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-hl8ll -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-fj895 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-gmd49 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-hl8ll -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.32s)
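
Note: the assertions above reduce to running nslookup from each busybox replica against three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local). Done by hand it looks like this; pod names come from the jsonpath listing and will differ on a fresh rollout:

  $ kubectl --context ha-894836 get pods -o jsonpath='{.items[*].metadata.name}'
  $ kubectl --context ha-894836 exec busybox-7b57f96db7-fj895 -- nslookup kubernetes.default.svc.cluster.local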

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-fj895 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-fj895 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-gmd49 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-gmd49 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-hl8ll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 kubectl -- exec busybox-7b57f96db7-hl8ll -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)
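
Note: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes the fifth line of busybox's nslookup output and its third space-separated field, which for the busybox image used here lands on the resolved address; the test then pings that address (192.168.49.1, the gateway of the cluster's docker network) from inside the pod. The same check by hand:

  $ kubectl --context ha-894836 exec busybox-7b57f96db7-fj895 -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  $ kubectl --context ha-894836 exec busybox-7b57f96db7-fj895 -- sh -c "ping -c 1 192.168.49.1"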

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (61.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node add --alsologtostderr -v 5
E1029 08:45:24.584097    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:24.590501    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:24.601983    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:24.623350    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:24.664710    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:24.746311    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:24.907741    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:25.229350    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:25.871269    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:27.153009    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:29.714352    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:34.836159    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:45.077544    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 node add --alsologtostderr -v 5: (1m0.757128038s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5: (1.097394189s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.85s)
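
Note: the burst of cert_rotation errors above appears to be client-go trying to reload certificates for the already-cleaned-up functional-546837 profile; it is noise carried over from an earlier test, not part of this one. The test itself is two commands:

  $ out/minikube-linux-arm64 -p ha-894836 node add --alsologtostderr -v 5
  $ out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5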

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-894836 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.127392089s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 status --output json --alsologtostderr -v 5: (1.008212358s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp testdata/cp-test.txt ha-894836:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836_ha-894836-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test_ha-894836_ha-894836-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836:/home/docker/cp-test.txt ha-894836-m03:/home/docker/cp-test_ha-894836_ha-894836-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test_ha-894836_ha-894836-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836:/home/docker/cp-test.txt ha-894836-m04:/home/docker/cp-test_ha-894836_ha-894836-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test_ha-894836_ha-894836-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp testdata/cp-test.txt ha-894836-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m02:/home/docker/cp-test.txt ha-894836:/home/docker/cp-test_ha-894836-m02_ha-894836.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test_ha-894836-m02_ha-894836.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m02:/home/docker/cp-test.txt ha-894836-m03:/home/docker/cp-test_ha-894836-m02_ha-894836-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test_ha-894836-m02_ha-894836-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m02:/home/docker/cp-test.txt ha-894836-m04:/home/docker/cp-test_ha-894836-m02_ha-894836-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test_ha-894836-m02_ha-894836-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp testdata/cp-test.txt ha-894836-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836:/home/docker/cp-test_ha-894836-m03_ha-894836.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836-m03_ha-894836-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m02.txt"
E1029 08:46:05.562459    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m03:/home/docker/cp-test.txt ha-894836-m04:/home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test_ha-894836-m03_ha-894836-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp testdata/cp-test.txt ha-894836-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1145660143/001/cp-test_ha-894836-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836:/home/docker/cp-test_ha-894836-m04_ha-894836.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m02:/home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 cp ha-894836-m04:/home/docker/cp-test.txt ha-894836-m03:/home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m03 "sudo cat /home/docker/cp-test_ha-894836-m04_ha-894836-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.81s)
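
Note: every hop above follows the same verify-by-cat pattern, copy with `cp`, then read the file back over ssh on the receiving node. Reduced to a single hop:

  $ out/minikube-linux-arm64 -p ha-894836 cp testdata/cp-test.txt ha-894836-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836-m02 "sudo cat /home/docker/cp-test.txt"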

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 node stop m02 --alsologtostderr -v 5: (12.029087678s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5: exit status 7 (786.263255ms)

                                                
                                                
-- stdout --
	ha-894836
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-894836-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-894836-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-894836-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:46:23.576001   48329 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:46:23.576136   48329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:46:23.576149   48329 out.go:374] Setting ErrFile to fd 2...
	I1029 08:46:23.576156   48329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:46:23.576528   48329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:46:23.576764   48329 out.go:368] Setting JSON to false
	I1029 08:46:23.576793   48329 mustload.go:66] Loading cluster: ha-894836
	I1029 08:46:23.577972   48329 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:46:23.578027   48329 notify.go:221] Checking for updates...
	I1029 08:46:23.578031   48329 status.go:174] checking status of ha-894836 ...
	I1029 08:46:23.578646   48329 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:46:23.597136   48329 status.go:371] ha-894836 host status = "Running" (err=<nil>)
	I1029 08:46:23.597159   48329 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:46:23.597707   48329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836
	I1029 08:46:23.633329   48329 host.go:66] Checking if "ha-894836" exists ...
	I1029 08:46:23.633700   48329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:46:23.633757   48329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836
	I1029 08:46:23.658405   48329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836/id_rsa Username:docker}
	I1029 08:46:23.769965   48329 ssh_runner.go:195] Run: systemctl --version
	I1029 08:46:23.776639   48329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:46:23.790160   48329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:46:23.849418   48329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-29 08:46:23.839959028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 08:46:23.850381   48329 kubeconfig.go:125] found "ha-894836" server: "https://192.168.49.254:8443"
	I1029 08:46:23.850422   48329 api_server.go:166] Checking apiserver status ...
	I1029 08:46:23.850469   48329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:46:23.862172   48329 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	I1029 08:46:23.871170   48329 api_server.go:182] apiserver freezer: "11:freezer:/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio/crio-3241d45cba23b3f26ac3f1eb1c124ae6222c3359fca82d4798f2eee2daf144c3"
	I1029 08:46:23.871245   48329 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/40404985106a41f978a2700fe6ada27a3c928cffbb0862a2ffc10f28a20d0577/crio/crio-3241d45cba23b3f26ac3f1eb1c124ae6222c3359fca82d4798f2eee2daf144c3/freezer.state
	I1029 08:46:23.878881   48329 api_server.go:204] freezer state: "THAWED"
	I1029 08:46:23.878917   48329 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1029 08:46:23.887415   48329 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1029 08:46:23.887449   48329 status.go:463] ha-894836 apiserver status = Running (err=<nil>)
	I1029 08:46:23.887461   48329 status.go:176] ha-894836 status: &{Name:ha-894836 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:46:23.887481   48329 status.go:174] checking status of ha-894836-m02 ...
	I1029 08:46:23.887772   48329 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:46:23.904630   48329 status.go:371] ha-894836-m02 host status = "Stopped" (err=<nil>)
	I1029 08:46:23.904655   48329 status.go:384] host is not running, skipping remaining checks
	I1029 08:46:23.904663   48329 status.go:176] ha-894836-m02 status: &{Name:ha-894836-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:46:23.904683   48329 status.go:174] checking status of ha-894836-m03 ...
	I1029 08:46:23.904987   48329 cli_runner.go:164] Run: docker container inspect ha-894836-m03 --format={{.State.Status}}
	I1029 08:46:23.923581   48329 status.go:371] ha-894836-m03 host status = "Running" (err=<nil>)
	I1029 08:46:23.923602   48329 host.go:66] Checking if "ha-894836-m03" exists ...
	I1029 08:46:23.924090   48329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m03
	I1029 08:46:23.941390   48329 host.go:66] Checking if "ha-894836-m03" exists ...
	I1029 08:46:23.941696   48329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:46:23.941740   48329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m03
	I1029 08:46:23.959696   48329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m03/id_rsa Username:docker}
	I1029 08:46:24.070334   48329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:46:24.083864   48329 kubeconfig.go:125] found "ha-894836" server: "https://192.168.49.254:8443"
	I1029 08:46:24.083895   48329 api_server.go:166] Checking apiserver status ...
	I1029 08:46:24.083942   48329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:46:24.096376   48329 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	I1029 08:46:24.105253   48329 api_server.go:182] apiserver freezer: "11:freezer:/docker/1037fcd9f8f464f04f46e8afe71dd1459d6a3ea7c259f500ee73f918e22eaca3/crio/crio-ea3e3d62292acdb400dceaf61f5a6a57d32f2af4b187cf3fc4743c30888dc802"
	I1029 08:46:24.105338   48329 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1037fcd9f8f464f04f46e8afe71dd1459d6a3ea7c259f500ee73f918e22eaca3/crio/crio-ea3e3d62292acdb400dceaf61f5a6a57d32f2af4b187cf3fc4743c30888dc802/freezer.state
	I1029 08:46:24.113402   48329 api_server.go:204] freezer state: "THAWED"
	I1029 08:46:24.113432   48329 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1029 08:46:24.121767   48329 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1029 08:46:24.121846   48329 status.go:463] ha-894836-m03 apiserver status = Running (err=<nil>)
	I1029 08:46:24.121869   48329 status.go:176] ha-894836-m03 status: &{Name:ha-894836-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:46:24.121915   48329 status.go:174] checking status of ha-894836-m04 ...
	I1029 08:46:24.122270   48329 cli_runner.go:164] Run: docker container inspect ha-894836-m04 --format={{.State.Status}}
	I1029 08:46:24.139902   48329 status.go:371] ha-894836-m04 host status = "Running" (err=<nil>)
	I1029 08:46:24.139927   48329 host.go:66] Checking if "ha-894836-m04" exists ...
	I1029 08:46:24.140206   48329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-894836-m04
	I1029 08:46:24.156947   48329 host.go:66] Checking if "ha-894836-m04" exists ...
	I1029 08:46:24.157251   48329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:46:24.157308   48329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-894836-m04
	I1029 08:46:24.174498   48329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/ha-894836-m04/id_rsa Username:docker}
	I1029 08:46:24.291046   48329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:46:24.304650   48329 status.go:176] ha-894836-m04 status: &{Name:ha-894836-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)
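
Note: the stderr trace shows how `status` decides an apiserver is Running: find the kube-apiserver process on the node, confirm its cgroup-v1 freezer state is THAWED, then probe /healthz on the HA VIP. A rough manual equivalent (the pid and cgroup path change every run; unauthenticated curl of /healthz assumes the default system:public-info-viewer RBAC is in place):

  $ out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo pgrep -xnf kube-apiserver.*minikube.*"
  $ out/minikube-linux-arm64 -p ha-894836 ssh -n ha-894836 "sudo egrep ^[0-9]+:freezer: /proc/<apiserver-pid>/cgroup"
  $ curl -k https://192.168.49.254:8443/healthz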

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (28.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node start m02 --alsologtostderr -v 5
E1029 08:46:46.524282    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 node start m02 --alsologtostderr -v 5: (26.67982018s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5: (1.248728245s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.227949749s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (24.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 stop --alsologtostderr -v 5: (24.100594533s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5: exit status 7 (111.601456ms)

                                                
                                                
-- stdout --
	ha-894836
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-894836-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-894836-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:56:20.365582   59393 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:56:20.365698   59393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:56:20.365707   59393 out.go:374] Setting ErrFile to fd 2...
	I1029 08:56:20.365712   59393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:56:20.365978   59393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 08:56:20.366181   59393 out.go:368] Setting JSON to false
	I1029 08:56:20.366216   59393 mustload.go:66] Loading cluster: ha-894836
	I1029 08:56:20.366302   59393 notify.go:221] Checking for updates...
	I1029 08:56:20.366618   59393 config.go:182] Loaded profile config "ha-894836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:56:20.366637   59393 status.go:174] checking status of ha-894836 ...
	I1029 08:56:20.367193   59393 cli_runner.go:164] Run: docker container inspect ha-894836 --format={{.State.Status}}
	I1029 08:56:20.386321   59393 status.go:371] ha-894836 host status = "Stopped" (err=<nil>)
	I1029 08:56:20.386341   59393 status.go:384] host is not running, skipping remaining checks
	I1029 08:56:20.386348   59393 status.go:176] ha-894836 status: &{Name:ha-894836 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:56:20.386371   59393 status.go:174] checking status of ha-894836-m02 ...
	I1029 08:56:20.386676   59393 cli_runner.go:164] Run: docker container inspect ha-894836-m02 --format={{.State.Status}}
	I1029 08:56:20.407929   59393 status.go:371] ha-894836-m02 host status = "Stopped" (err=<nil>)
	I1029 08:56:20.407947   59393 status.go:384] host is not running, skipping remaining checks
	I1029 08:56:20.407954   59393 status.go:176] ha-894836-m02 status: &{Name:ha-894836-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:56:20.407972   59393 status.go:174] checking status of ha-894836-m04 ...
	I1029 08:56:20.408241   59393 cli_runner.go:164] Run: docker container inspect ha-894836-m04 --format={{.State.Status}}
	I1029 08:56:20.427828   59393 status.go:371] ha-894836-m04 host status = "Stopped" (err=<nil>)
	I1029 08:56:20.427848   59393 status.go:384] host is not running, skipping remaining checks
	I1029 08:56:20.427855   59393 status.go:176] ha-894836-m04 status: &{Name:ha-894836-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (82.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m21.691401679s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.76s)
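
Note: the go-template at the end prints one Ready status per node. An equivalent jsonpath form (not what the test runs, just a more readable alternative) that also shows the node name:

  $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'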

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (85.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 node add --control-plane --alsologtostderr -v 5
E1029 08:58:13.391424    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 node add --control-plane --alsologtostderr -v 5: (1m24.435501103s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-894836 status --alsologtostderr -v 5: (1.150712232s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.100642439s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.02s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-852077 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1029 09:00:24.583788    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-852077 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.014116107s)
--- PASS: TestJSONOutput/start/Command (81.02s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-852077 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-852077 --output=json --user=testUser: (5.892507946s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-586112 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-586112 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.730012ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c0eea26c-af38-4247-a76a-057ae95162f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-586112] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9e3e06f-c2ab-49ac-ab9c-f74b7b8ea029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21800"}}
	{"specversion":"1.0","id":"ec048415-80ff-4899-83e6-46cc6dd3e55e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d66b7574-6817-44dc-9def-ae50048fee5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig"}}
	{"specversion":"1.0","id":"0d61b210-a352-43da-b058-5b9cbef8a5c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube"}}
	{"specversion":"1.0","id":"6cc25851-ad60-4611-b889-353d3f7680d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"38392483-7411-4300-9101-b3dbb025eebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3161554d-ec47-4f5f-8dca-326d3c650a7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-586112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-586112
--- PASS: TestErrorJSONOutput (0.25s)
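
Note: every line of --output=json is a CloudEvents envelope (specversion, id, source, type, data), as the dump above shows; the human-readable text sits in .data.message, and error events additionally carry exitcode and name. The stream can be flattened with jq:

  $ out/minikube-linux-arm64 start -p json-output-error-586112 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r '.data.message // empty'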

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.18s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-181379 --network=
E1029 09:01:16.471537    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-181379 --network=: (38.93223269s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-181379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-181379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-181379: (2.227779865s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.18s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.38s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-915961 --network=bridge
E1029 09:01:47.650610    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-915961 --network=bridge: (34.21434727s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-915961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-915961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-915961: (2.138206557s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.38s)

                                                
                                    
x
+
TestKicExistingNetwork (38.03s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1029 09:02:18.433746    4550 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1029 09:02:18.448293    4550 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1029 09:02:18.448396    4550 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1029 09:02:18.448413    4550 cli_runner.go:164] Run: docker network inspect existing-network
W1029 09:02:18.464287    4550 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1029 09:02:18.464337    4550 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1029 09:02:18.464352    4550 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1029 09:02:18.464449    4550 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1029 09:02:18.480873    4550 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0687088684ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e2:78:39:db:9c} reservation:<nil>}
I1029 09:02:18.481132    4550 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bcfee0}
I1029 09:02:18.481152    4550 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1029 09:02:18.481204    4550 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1029 09:02:18.540412    4550 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-439456 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-439456 --network=existing-network: (35.748713009s)
helpers_test.go:175: Cleaning up "existing-network-439456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-439456
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-439456: (2.138014931s)
I1029 09:02:56.451058    4550 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.03s)
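The run above shows the kic network setup end to end: inspect the named network (it does not exist yet), scan the existing docker networks so that 192.168.49.0/24 is skipped, pick 192.168.58.0/24 as the first free private /24, and create a bridge network carrying the created_by.minikube.sigs.k8s.io labels that the final "docker network ls --filter=label=..." cleanup check relies on. The Go program below is a minimal sketch of that flow, not minikube's actual network_create.go; the docker flags, labels, and the +9 stride between candidate subnets (49, 58, 67, ...) are taken from this log, everything else is an assumption.

// freenet.go: sketch of the subnet selection and network creation seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the IPv4 subnets of all existing docker networks,
// using the same go-template field the test's inspect command reads.
func takenSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		sub, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err != nil {
			continue // the network may have vanished between ls and inspect
		}
		if s := strings.TrimSpace(string(sub)); s != "" {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Walk candidate private /24s with the stride this log shows (49, 58, 67, ...).
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+fmt.Sprintf("192.168.%d.1", third),
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Errorf("docker network create: %v: %s", err, out))
		}
		return
	}
}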

                                                
                                    
x
+
TestKicCustomSubnet (37.84s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-107908 --subnet=192.168.60.0/24
E1029 09:03:13.392548    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-107908 --subnet=192.168.60.0/24: (35.58968716s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-107908 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-107908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-107908
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-107908: (2.225093852s)
--- PASS: TestKicCustomSubnet (37.84s)
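The verification step above reads the subnet back with a go-template ("{{(index .IPAM.Config 0).Subnet}}"). The same check can be written by decoding the inspect output as JSON; the sketch below assumes the profile name and the requested 192.168.60.0/24 from this run and uses only the IPAM.Config fields the template already references.

// verifysubnet.go: confirm the custom subnet of the network this test created.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type network struct {
	Name string
	IPAM struct {
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-107908").Output()
	if err != nil {
		panic(err)
	}
	var nets []network // docker network inspect always returns a JSON array
	if err := json.Unmarshal(out, &nets); err != nil {
		panic(err)
	}
	if len(nets) == 0 || len(nets[0].IPAM.Config) == 0 {
		panic("network not found or has no IPAM config")
	}
	got := nets[0].IPAM.Config[0].Subnet
	if got != "192.168.60.0/24" {
		panic(fmt.Sprintf("expected 192.168.60.0/24, got %s", got))
	}
	fmt.Println("subnet verified:", got)
}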

                                                
                                    
x
+
TestKicStaticIP (37.5s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-732900 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-732900 --static-ip=192.168.200.200: (35.124903179s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-732900 ip
helpers_test.go:175: Cleaning up "static-ip-732900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-732900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-732900: (2.216727803s)
--- PASS: TestKicStaticIP (37.50s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (71.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-657297 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-657297 --driver=docker  --container-runtime=crio: (33.23110317s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-659907 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-659907 --driver=docker  --container-runtime=crio: (32.730754051s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-657297
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-659907
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-659907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-659907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-659907: (2.096518262s)
helpers_test.go:175: Cleaning up "first-657297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-657297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-657297: (2.034638044s)
--- PASS: TestMinikubeProfile (71.52s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-851118 --memory=3072 --mount-string /tmp/TestMountStartserial4053958962/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1029 09:05:24.583530    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-851118 --memory=3072 --mount-string /tmp/TestMountStartserial4053958962/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.491990932s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.49s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-851118 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (10.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-853054 --memory=3072 --mount-string /tmp/TestMountStartserial4053958962/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-853054 --memory=3072 --mount-string /tmp/TestMountStartserial4053958962/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.059674038s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.06s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-853054 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.81s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-851118 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-851118 --alsologtostderr -v=5: (1.814763644s)
--- PASS: TestMountStart/serial/DeleteFirst (1.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-853054 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-853054
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-853054: (1.29818188s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.61s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-853054
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-853054: (6.607194185s)
--- PASS: TestMountStart/serial/RestartStopped (7.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-853054 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279229 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1029 09:08:13.387699    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279229 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.1958394s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.73s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-279229 -- rollout status deployment/busybox: (3.012947437s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-ccxh6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-w5fbp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-ccxh6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-w5fbp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-ccxh6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-w5fbp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-ccxh6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-ccxh6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-w5fbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279229 -- exec busybox-7b57f96db7-w5fbp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
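The two exec pairs above recover the host address (192.168.67.1) from inside each busybox pod and ping it once; busybox nslookup prints the answer on its fifth line, which is why the test pipes through awk 'NR==5' and cut. The sketch below replays that check from the host, assuming kubectl is on PATH and reusing the multinode-279229 context and pod names from this run.

// pinghost.go: re-run the in-pod host-reachability check from this test by hand.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pods := []string{"busybox-7b57f96db7-ccxh6", "busybox-7b57f96db7-w5fbp"}
	// Same pipeline the test runs inside the pod to extract the resolved address.
	resolve := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	for _, pod := range pods {
		out, err := exec.Command("kubectl", "--context", "multinode-279229",
			"exec", pod, "--", "sh", "-c", resolve).Output()
		if err != nil {
			panic(err)
		}
		hostIP := strings.TrimSpace(string(out))
		fmt.Printf("%s resolved host.minikube.internal to %s\n", pod, hostIP)
		ping := exec.Command("kubectl", "--context", "multinode-279229",
			"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
		if out, err := ping.CombinedOutput(); err != nil {
			panic(fmt.Errorf("ping from %s failed: %v\n%s", pod, err, out))
		}
		fmt.Printf("%s can reach the host\n", pod)
	}
}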

                                                
                                    
x
+
TestMultiNode/serial/AddNode (58.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-279229 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-279229 -v=5 --alsologtostderr: (57.56712005s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.27s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-279229 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp testdata/cp-test.txt multinode-279229:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3836144762/001/cp-test_multinode-279229.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229:/home/docker/cp-test.txt multinode-279229-m02:/home/docker/cp-test_multinode-279229_multinode-279229-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m02 "sudo cat /home/docker/cp-test_multinode-279229_multinode-279229-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229:/home/docker/cp-test.txt multinode-279229-m03:/home/docker/cp-test_multinode-279229_multinode-279229-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m03 "sudo cat /home/docker/cp-test_multinode-279229_multinode-279229-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp testdata/cp-test.txt multinode-279229-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3836144762/001/cp-test_multinode-279229-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229-m02:/home/docker/cp-test.txt multinode-279229:/home/docker/cp-test_multinode-279229-m02_multinode-279229.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229 "sudo cat /home/docker/cp-test_multinode-279229-m02_multinode-279229.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229-m02:/home/docker/cp-test.txt multinode-279229-m03:/home/docker/cp-test_multinode-279229-m02_multinode-279229-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m03 "sudo cat /home/docker/cp-test_multinode-279229-m02_multinode-279229-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp testdata/cp-test.txt multinode-279229-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3836144762/001/cp-test_multinode-279229-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229-m03:/home/docker/cp-test.txt multinode-279229:/home/docker/cp-test_multinode-279229-m03_multinode-279229.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229 "sudo cat /home/docker/cp-test_multinode-279229-m03_multinode-279229.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 cp multinode-279229-m03:/home/docker/cp-test.txt multinode-279229-m02:/home/docker/cp-test_multinode-279229-m03_multinode-279229-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 ssh -n multinode-279229-m02 "sudo cat /home/docker/cp-test_multinode-279229-m03_multinode-279229-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.51s)
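Every cp in the block above is immediately verified by ssh-ing into the destination node and cat-ing the file back. The program below condenses that round trip for one node, reusing the binary path, profile, node name, and remote path from this log; it is an illustrative sketch, not the helpers_test.go implementation.

// cproundtrip.go: copy a file to a node with `minikube cp` and read it back over ssh.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "multinode-279229"
	const node = "multinode-279229-m02"
	const remote = "/home/docker/cp-test.txt"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if out, err := run("-p", profile, "cp", "testdata/cp-test.txt", node+":"+remote); err != nil {
		panic(fmt.Errorf("cp failed: %v\n%s", err, out))
	}
	// Read the copied file back the same way the helper does, as a single ssh command.
	got, err := run("-p", profile, "ssh", "-n", node, "sudo cat "+remote)
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("copied file differs:\nwant %q\ngot  %q", want, got))
	}
	fmt.Println("cp round trip verified for", node)
}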

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-279229 node stop m03: (1.337315532s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279229 status: exit status 7 (560.531221ms)

                                                
                                                
-- stdout --
	multinode-279229
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-279229-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-279229-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr: exit status 7 (557.970785ms)

                                                
                                                
-- stdout --
	multinode-279229
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-279229-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-279229-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:09:32.659064  110087 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:09:32.659248  110087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:32.659262  110087 out.go:374] Setting ErrFile to fd 2...
	I1029 09:09:32.659268  110087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:32.659563  110087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:09:32.659781  110087 out.go:368] Setting JSON to false
	I1029 09:09:32.659830  110087 mustload.go:66] Loading cluster: multinode-279229
	I1029 09:09:32.659919  110087 notify.go:221] Checking for updates...
	I1029 09:09:32.660348  110087 config.go:182] Loaded profile config "multinode-279229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:32.660367  110087 status.go:174] checking status of multinode-279229 ...
	I1029 09:09:32.661223  110087 cli_runner.go:164] Run: docker container inspect multinode-279229 --format={{.State.Status}}
	I1029 09:09:32.680490  110087 status.go:371] multinode-279229 host status = "Running" (err=<nil>)
	I1029 09:09:32.680516  110087 host.go:66] Checking if "multinode-279229" exists ...
	I1029 09:09:32.680818  110087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279229
	I1029 09:09:32.706109  110087 host.go:66] Checking if "multinode-279229" exists ...
	I1029 09:09:32.706415  110087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:09:32.706459  110087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279229
	I1029 09:09:32.726146  110087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/multinode-279229/id_rsa Username:docker}
	I1029 09:09:32.833688  110087 ssh_runner.go:195] Run: systemctl --version
	I1029 09:09:32.841196  110087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:09:32.854914  110087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:32.931440  110087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-29 09:09:32.921449531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:09:32.932108  110087 kubeconfig.go:125] found "multinode-279229" server: "https://192.168.67.2:8443"
	I1029 09:09:32.932146  110087 api_server.go:166] Checking apiserver status ...
	I1029 09:09:32.932202  110087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:09:32.944221  110087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	I1029 09:09:32.952911  110087 api_server.go:182] apiserver freezer: "11:freezer:/docker/b31d37758d77528748b4d266e94bb150f9163748871800f8e7f35436afa49649/crio/crio-49ebb9235a3513e90f6a3b0034edd28e6c3a05220acc89bf56905317273c719a"
	I1029 09:09:32.952987  110087 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b31d37758d77528748b4d266e94bb150f9163748871800f8e7f35436afa49649/crio/crio-49ebb9235a3513e90f6a3b0034edd28e6c3a05220acc89bf56905317273c719a/freezer.state
	I1029 09:09:32.960862  110087 api_server.go:204] freezer state: "THAWED"
	I1029 09:09:32.960888  110087 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1029 09:09:32.969569  110087 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1029 09:09:32.969603  110087 status.go:463] multinode-279229 apiserver status = Running (err=<nil>)
	I1029 09:09:32.969615  110087 status.go:176] multinode-279229 status: &{Name:multinode-279229 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 09:09:32.969632  110087 status.go:174] checking status of multinode-279229-m02 ...
	I1029 09:09:32.969974  110087 cli_runner.go:164] Run: docker container inspect multinode-279229-m02 --format={{.State.Status}}
	I1029 09:09:32.986592  110087 status.go:371] multinode-279229-m02 host status = "Running" (err=<nil>)
	I1029 09:09:32.986621  110087 host.go:66] Checking if "multinode-279229-m02" exists ...
	I1029 09:09:32.986940  110087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279229-m02
	I1029 09:09:33.007573  110087 host.go:66] Checking if "multinode-279229-m02" exists ...
	I1029 09:09:33.007898  110087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:09:33.007953  110087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279229-m02
	I1029 09:09:33.027313  110087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21800-2763/.minikube/machines/multinode-279229-m02/id_rsa Username:docker}
	I1029 09:09:33.129245  110087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:09:33.142842  110087 status.go:176] multinode-279229-m02 status: &{Name:multinode-279229-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1029 09:09:33.142872  110087 status.go:174] checking status of multinode-279229-m03 ...
	I1029 09:09:33.143176  110087 cli_runner.go:164] Run: docker container inspect multinode-279229-m03 --format={{.State.Status}}
	I1029 09:09:33.160532  110087 status.go:371] multinode-279229-m03 host status = "Stopped" (err=<nil>)
	I1029 09:09:33.160551  110087 status.go:384] host is not running, skipping remaining checks
	I1029 09:09:33.160558  110087 status.go:176] multinode-279229-m03 status: &{Name:multinode-279229-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
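The --alsologtostderr trace above spells out how the status command decides each field: docker container inspect --format={{.State.Status}} for the host, systemctl is-active kubelet over ssh for the kubelet, and for the control plane a pgrep for kube-apiserver, a read of its freezer cgroup state (expecting THAWED), and finally a GET against https://192.168.67.2:8443/healthz that must answer 200 ok. The sketch below reproduces only that last, external probe; the endpoint comes from this log, and skipping TLS verification is an assumption made for brevity (minikube itself trusts the cluster CA).

// healthz.go: probe the apiserver health endpoint the status check hits above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; a real check would use the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as the log shows.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}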

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-279229 node start m03 -v=5 --alsologtostderr: (7.355711959s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.16s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (78.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-279229
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-279229
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-279229: (25.061147023s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279229 --wait=true -v=5 --alsologtostderr
E1029 09:10:24.583721    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279229 --wait=true -v=5 --alsologtostderr: (53.454365783s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-279229
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-279229 node delete m03: (5.203968048s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.96s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-279229 stop: (23.80511009s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279229 status: exit status 7 (91.711106ms)

                                                
                                                
-- stdout --
	multinode-279229
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-279229-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr: exit status 7 (86.990541ms)

                                                
                                                
-- stdout --
	multinode-279229
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-279229-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:11:29.854041  117877 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:11:29.854151  117877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:29.854162  117877 out.go:374] Setting ErrFile to fd 2...
	I1029 09:11:29.854166  117877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:29.854434  117877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:11:29.854616  117877 out.go:368] Setting JSON to false
	I1029 09:11:29.854648  117877 mustload.go:66] Loading cluster: multinode-279229
	I1029 09:11:29.855027  117877 config.go:182] Loaded profile config "multinode-279229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:29.855045  117877 status.go:174] checking status of multinode-279229 ...
	I1029 09:11:29.855522  117877 cli_runner.go:164] Run: docker container inspect multinode-279229 --format={{.State.Status}}
	I1029 09:11:29.855743  117877 notify.go:221] Checking for updates...
	I1029 09:11:29.873701  117877 status.go:371] multinode-279229 host status = "Stopped" (err=<nil>)
	I1029 09:11:29.873724  117877 status.go:384] host is not running, skipping remaining checks
	I1029 09:11:29.873730  117877 status.go:176] multinode-279229 status: &{Name:multinode-279229 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 09:11:29.873754  117877 status.go:174] checking status of multinode-279229-m02 ...
	I1029 09:11:29.874062  117877 cli_runner.go:164] Run: docker container inspect multinode-279229-m02 --format={{.State.Status}}
	I1029 09:11:29.893886  117877 status.go:371] multinode-279229-m02 host status = "Stopped" (err=<nil>)
	I1029 09:11:29.893907  117877 status.go:384] host is not running, skipping remaining checks
	I1029 09:11:29.893914  117877 status.go:176] multinode-279229-m02 status: &{Name:multinode-279229-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (52.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279229 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279229 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.828323577s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279229 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.55s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-279229
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279229-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-279229-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.60587ms)

                                                
                                                
-- stdout --
	* [multinode-279229-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-279229-m02' is duplicated with machine name 'multinode-279229-m02' in profile 'multinode-279229'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279229-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279229-m03 --driver=docker  --container-runtime=crio: (34.423480862s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-279229
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-279229: exit status 80 (337.242755ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-279229 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-279229-m03 already exists in multinode-279229-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-279229-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-279229-m03: (2.075617207s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.98s)

                                                
                                    
x
+
TestScheduledStopUnix (109.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-409035 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-409035 --memory=3072 --driver=docker  --container-runtime=crio: (33.744673124s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-409035 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-409035 -n scheduled-stop-409035
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-409035 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1029 09:21:05.582404    4550 retry.go:31] will retry after 128.169µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.584418    4550 retry.go:31] will retry after 160.215µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.585573    4550 retry.go:31] will retry after 332.933µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.586703    4550 retry.go:31] will retry after 443.611µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.587826    4550 retry.go:31] will retry after 565.299µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.588943    4550 retry.go:31] will retry after 437.201µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.590057    4550 retry.go:31] will retry after 635.71µs: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.591170    4550 retry.go:31] will retry after 1.899219ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.593353    4550 retry.go:31] will retry after 3.76807ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.597562    4550 retry.go:31] will retry after 2.797186ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.600767    4550 retry.go:31] will retry after 7.182852ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.608990    4550 retry.go:31] will retry after 7.982667ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.617485    4550 retry.go:31] will retry after 18.454612ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.636728    4550 retry.go:31] will retry after 13.13867ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.650962    4550 retry.go:31] will retry after 28.88752ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
I1029 09:21:05.682114    4550 retry.go:31] will retry after 47.486529ms: open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-409035 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-409035 -n scheduled-stop-409035
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-409035
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-409035 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-409035
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-409035: exit status 7 (68.361011ms)

                                                
                                                
-- stdout --
	scheduled-stop-409035
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-409035 -n scheduled-stop-409035
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-409035 -n scheduled-stop-409035: exit status 7 (66.619537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-409035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-409035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-409035: (4.338317755s)
--- PASS: TestScheduledStopUnix (109.72s)
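The retry.go lines above show the wait pattern used until the scheduled-stop pid file for the profile is written: re-check after a short, roughly doubling, jittered delay. Below is a minimal sketch of that wait loop using the profile path from this log; it is not minikube's retry package, only the same idea.

// waitpid.go: poll until the scheduled-stop pid file appears, with growing delays
// similar to the "will retry after ..." lines in the log above.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	pidFile := "/home/jenkins/minikube-integration/21800-2763/.minikube/profiles/scheduled-stop-409035/pid"
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(30 * time.Second)

	for time.Now().Before(deadline) {
		if _, err := os.Stat(pidFile); err == nil {
			fmt.Println("pid file found; the scheduled stop has been recorded")
			return
		} else if !os.IsNotExist(err) {
			panic(err) // anything other than "not there yet" is a real error
		}
		// Add jitter, then roughly double the delay each round, capped at one second.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Println("will retry after", wait)
		time.Sleep(wait)
		delay *= 2
		if delay > time.Second {
			delay = time.Second
		}
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", pidFile)
	os.Exit(1)
}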

                                                
                                    
x
+
TestInsufficientStorage (13.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-312020 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-312020 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.195689367s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"167be756-cf99-47d6-97b9-61dfd266e92e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-312020] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"edd74edc-782f-4141-8a21-b41fef89c60a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21800"}}
	{"specversion":"1.0","id":"7bb98c07-e7c2-4069-9518-39cd99bfad5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7502a423-f86b-432c-a117-863583fc2887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig"}}
	{"specversion":"1.0","id":"03061f8a-a630-4ef4-95a4-d3dcb3316fff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube"}}
	{"specversion":"1.0","id":"62d22c12-3cfa-4788-a2f9-b9095084cb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a98bc45c-651a-4199-8ac7-b2154acf758b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"88d84e6f-68ef-4df8-9451-da28f797fdbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"831187e2-0bb2-4e3c-ad67-2ae7c22cc5e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6cafa818-9650-44cd-a862-da390b0a92b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf7ec393-dae1-4c6a-921e-2b766ab1bb56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c89d2ec6-a1c5-4dab-85e7-dec32e15d33b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-312020\" primary control-plane node in \"insufficient-storage-312020\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eebe0895-8ad6-45a2-8150-d0bc8a53a115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e827362c-8853-43ba-8e04-646c7beb9fdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c07da5a-31e3-49af-a097-f34a6cbea7a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-312020 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-312020 --output=json --layout=cluster: exit status 7 (303.038599ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-312020","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-312020","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1029 09:22:32.512196  134338 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-312020" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-312020 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-312020 --output=json --layout=cluster: exit status 7 (303.825709ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-312020","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-312020","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1029 09:22:32.817810  134406 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-312020" does not appear in /home/jenkins/minikube-integration/21800-2763/kubeconfig
	E1029 09:22:32.827791  134406 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/insufficient-storage-312020/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-312020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-312020
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-312020: (1.99589437s)
--- PASS: TestInsufficientStorage (13.80s)
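The `minikube status --output=json --layout=cluster` payloads printed above all share one shape: a top-level profile status (507 / InsufficientStorage here) plus per-component and per-node statuses. A minimal decoding sketch, assuming only the field names shown in the output; the Go type names are invented for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

type componentStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

type nodeStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]componentStatus
}

type clusterStatus struct {
	Name          string
	StatusCode    int
	StatusName    string
	StatusDetail  string
	Step          string
	StepDetail    string
	BinaryVersion string
	Components    map[string]componentStatus
	Nodes         []nodeStatus
}

func main() {
	// Verbatim copy of the first status payload from the log above.
	payload := `{"Name":"insufficient-storage-312020","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-312020","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(payload), &st); err != nil {
		panic(err)
	}
	// 507 maps to InsufficientStorage, which is what TestInsufficientStorage asserts on.
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}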

                                                
                                    
TestRunningBinaryUpgrade (56.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4141790090 start -p running-upgrade-214661 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4141790090 start -p running-upgrade-214661 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.305237408s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-214661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-214661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.932674201s)
helpers_test.go:175: Cleaning up "running-upgrade-214661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-214661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-214661: (2.35088913s)
--- PASS: TestRunningBinaryUpgrade (56.27s)

                                                
                                    
TestKubernetesUpgrade (356.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.495201921s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-392485
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-392485: (1.428943478s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-392485 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-392485 status --format={{.Host}}: exit status 7 (69.834562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.001388265s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-392485 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (113.849229ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-392485] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-392485
	    minikube start -p kubernetes-upgrade-392485 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3924852 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-392485 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-392485 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.612572584s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-392485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-392485
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-392485: (2.311920928s)
--- PASS: TestKubernetesUpgrade (356.16s)
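The exit-status-106 refusal above comes from comparing the requested Kubernetes version (v1.28.0) against the version the cluster already runs (v1.34.1) and rejecting any downgrade. A hedged sketch of that kind of guard, using golang.org/x/mod/semver purely for illustration; minikube's own check may be implemented differently:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects any request that would move an existing cluster
// to an older Kubernetes release; upgrades and same-version restarts pass.
func checkVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	// Matches the scenario in the log: the cluster is on v1.34.1 and v1.28.0 is requested.
	if err := checkVersionChange("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}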

                                                
                                    
TestMissingContainerUpgrade (144.87s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1224593851 start -p missing-upgrade-648122 --memory=3072 --driver=docker  --container-runtime=crio
E1029 09:23:13.387634    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1224593851 start -p missing-upgrade-648122 --memory=3072 --driver=docker  --container-runtime=crio: (1m11.462184313s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-648122
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-648122
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-648122 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-648122 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.541570275s)
helpers_test.go:175: Cleaning up "missing-upgrade-648122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-648122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-648122: (2.385072513s)
--- PASS: TestMissingContainerUpgrade (144.87s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-988770 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-988770 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.552047ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-988770] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-988770 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-988770 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.301775554s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-988770 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.77s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.513520638s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-988770 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-988770 status -o json: exit status 2 (389.60158ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-988770","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-988770
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-988770: (2.139967043s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.04s)

                                                
                                    
TestNoKubernetes/serial/Start (10.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-988770 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.489316297s)
--- PASS: TestNoKubernetes/serial/Start (10.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-988770 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-988770 "sudo systemctl is-active --quiet service kubelet": exit status 1 (430.343077ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)
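VerifyK8sNotRunning passes precisely because the ssh'd command exits non-zero: `systemctl is-active` reports 3 for an inactive unit, which surfaces above as "ssh: Process exited with status 3". A small local illustration of the same check (not the test's actual ssh plumbing), only meaningful on a systemd host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command string as in the log above.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Any non-zero status means the unit is not active, which is the
		// expected (passing) outcome for a --no-kubernetes profile.
		fmt.Println("kubelet not running, exit code:", exitErr.ExitCode())
		return
	}
	if err == nil {
		fmt.Println("kubelet is active (unexpected for a --no-kubernetes profile)")
	}
}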

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (3.642239909s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.24s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-988770
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-988770: (1.395782861s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-988770 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-988770 --driver=docker  --container-runtime=crio: (7.120643731s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-988770 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-988770 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.179382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (53.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2532852990 start -p stopped-upgrade-802711 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1029 09:25:24.584014    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2532852990 start -p stopped-upgrade-802711 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.616450627s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2532852990 -p stopped-upgrade-802711 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2532852990 -p stopped-upgrade-802711 stop: (1.238714554s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-802711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-802711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.372236386s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.23s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-802711
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-802711: (1.304422359s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
TestPause/serial/Start (84.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-598473 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1029 09:28:13.388061    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-598473 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.98678812s)
--- PASS: TestPause/serial/Start (84.99s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-598473 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-598473 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.873566388s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.89s)

                                                
                                    
TestNetworkPlugins/group/false (5.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-937200 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-937200 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (252.196436ms)

                                                
                                                
-- stdout --
	* [false-937200] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:29:41.004638  172775 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:29:41.004854  172775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:29:41.004883  172775 out.go:374] Setting ErrFile to fd 2...
	I1029 09:29:41.004905  172775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:29:41.008970  172775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-2763/.minikube/bin
	I1029 09:29:41.009518  172775 out.go:368] Setting JSON to false
	I1029 09:29:41.010486  172775 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4333,"bootTime":1761725848,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1029 09:29:41.010565  172775 start.go:143] virtualization:  
	I1029 09:29:41.015061  172775 out.go:179] * [false-937200] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1029 09:29:41.018024  172775 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:29:41.018134  172775 notify.go:221] Checking for updates...
	I1029 09:29:41.024411  172775 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:29:41.027467  172775 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-2763/kubeconfig
	I1029 09:29:41.030323  172775 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-2763/.minikube
	I1029 09:29:41.033037  172775 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1029 09:29:41.035781  172775 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:29:41.039107  172775 config.go:182] Loaded profile config "kubernetes-upgrade-392485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:29:41.039225  172775 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:29:41.071882  172775 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1029 09:29:41.072061  172775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:29:41.169323  172775 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-29 09:29:41.159118367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1029 09:29:41.169426  172775 docker.go:319] overlay module found
	I1029 09:29:41.172635  172775 out.go:179] * Using the docker driver based on user configuration
	I1029 09:29:41.175465  172775 start.go:309] selected driver: docker
	I1029 09:29:41.175480  172775 start.go:930] validating driver "docker" against <nil>
	I1029 09:29:41.175492  172775 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:29:41.179022  172775 out.go:203] 
	W1029 09:29:41.181810  172775 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1029 09:29:41.184628  172775 out.go:203] 

                                                
                                                
** /stderr **
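This MK_USAGE failure is the expected result: with --container-runtime=crio, minikube rejects --cni=false during flag validation, before any node is created. A minimal sketch of that kind of validation, with invented function names (not minikube's actual code):

package main

import "fmt"

// validateCNI mirrors the constraint reported above: the crio runtime needs a CNI,
// so explicitly disabling CNI is a usage error.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime == "crio" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}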
net_test.go:88: 
----------------------- debugLogs start: false-937200 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-937200" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:29:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-392485
contexts:
- context:
    cluster: kubernetes-upgrade-392485
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:29:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-392485
  name: kubernetes-upgrade-392485
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-392485
  user:
    client-certificate: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/kubernetes-upgrade-392485/client.crt
    client-key: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/kubernetes-upgrade-392485/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-937200

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937200"

                                                
                                                
----------------------- debugLogs end: false-937200 [took: 5.257151395s] --------------------------------
helpers_test.go:175: Cleaning up "false-937200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-937200
--- PASS: TestNetworkPlugins/group/false (5.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (59.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.597600407s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-162751 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c908328-618a-4e6e-a19d-9960059ef8a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c908328-618a-4e6e-a19d-9960059ef8a7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005148559s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-162751 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)
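The DeployApp step waits up to 8m0s for pods labelled integration-test=busybox in the default namespace to become healthy. A rough client-go sketch of that kind of label-selector lookup, assuming the KUBECONFIG path shown elsewhere in this report; it is an illustration, not the helpers_test.go implementation:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the MINIKUBE/KUBECONFIG settings printed in this report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21800-2763/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: "integration-test=busybox"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// The test considers the app deployed once the matching pod reports Running.
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}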

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-162751 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-162751 --alsologtostderr -v=3: (12.020289766s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751: exit status 7 (79.539043ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-162751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1029 09:33:13.387723    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-162751 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.512518688s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162751 -n old-k8s-version-162751
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dvv98" [7c0cb30a-8153-4136-80e4-1c87bbec948c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00363415s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dvv98" [7c0cb30a-8153-4136-80e4-1c87bbec948c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011449022s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-162751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162751 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
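VerifyKubernetesImages runs `image list --format=json` and reports anything that is not part of the expected image set for the requested Kubernetes version, which is where the kindnetd and busybox lines above come from. A small decoder sketch follows; the `repoTags` field name is an assumption, since the report only shows the derived log lines rather than the raw JSON:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image keeps only the field we need; repoTags is assumed, not confirmed
// by the report, so adjust it if the real schema differs.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "old-k8s-version-162751", "image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
	// The test then diffs this list against the expected images for the
	// requested Kubernetes version and logs the leftovers as
	// "Found non-minikube image: ..." entries, as seen above.
}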

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (77.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m17.524888061s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:34:36.476097    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.224122601s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-505993 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e30fb005-524d-4e90-8800-e6ce95927686] Pending
helpers_test.go:352: "busybox" [e30fb005-524d-4e90-8800-e6ce95927686] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e30fb005-524d-4e90-8800-e6ce95927686] Running
E1029 09:35:07.654101    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003607231s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-505993 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)
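DeployApp applies testdata/busybox.yaml, waits up to 8m for the integration-test=busybox pod to become healthy, then reads the container's open-file limit with `ulimit -n`. A rough standalone version of those steps, shelling out to kubectl with the context from the log and using `kubectl wait` in place of the harness's pod poller:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl command against the no-preload-505993 context
// and returns its combined output.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "no-preload-505993"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Create the busybox test pod from the repo's testdata manifest.
	if out, err := run("create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}
	// 2. Wait for it to become Ready (the test budget is 8 minutes).
	if out, err := run("wait", "--for=condition=Ready", "pod", "busybox", "--timeout=8m"); err != nil {
		fmt.Println("pod never became Ready:", err, out)
		return
	}
	// 3. Check the open-file limit inside the container, as the test does.
	out, err := run("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Print(out)
	if err != nil {
		fmt.Println("exec failed:", err)
	}
}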

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-505993 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-505993 --alsologtostderr -v=3: (12.052021139s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-946178 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7fa0339a-3020-460c-8bb9-421556d3e0d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1029 09:35:24.584120    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [7fa0339a-3020-460c-8bb9-421556d3e0d5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004569026s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-946178 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993: exit status 7 (77.664774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-505993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (53.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-505993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.015759663s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-505993 -n no-preload-505993
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-946178 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-946178 --alsologtostderr -v=3: (12.490718752s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178: exit status 7 (89.734504ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-946178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-946178 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.752129228s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946178 -n embed-certs-946178
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-985l5" [af3605e2-60e5-49fb-9b85-109d52e037a5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003869234s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-985l5" [af3605e2-60e5-49fb-9b85-109d52e037a5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00307593s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-505993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-505993 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9fqk4" [85e456db-0228-4712-8f17-2c28e9122628] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008937738s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.673218629s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.67s)
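This profile moves the apiserver to port 8444 via --apiserver-port=8444. A quick way to confirm the generated kubeconfig picked up the non-default port is to read the cluster's server URL; the sketch assumes minikube registered the cluster entry under the profile name, which is its usual behaviour:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the server URL for the cluster entry created by this profile.
	jsonpath := `{.clusters[?(@.name=="default-k8s-diff-port-154565")].cluster.server}`
	out, err := exec.Command("kubectl", "config", "view", "-o", "jsonpath="+jsonpath).Output()
	if err != nil {
		fmt.Println("config view failed:", err)
		return
	}
	server := strings.TrimSpace(string(out))
	fmt.Println("apiserver:", server)
	if !strings.HasSuffix(server, ":8444") {
		fmt.Println("warning: expected the custom 8444 port from --apiserver-port")
	}
}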

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9fqk4" [85e456db-0228-4712-8f17-2c28e9122628] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003934416s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-946178 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-946178 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:37:10.966714    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:10.972975    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:10.984244    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:11.005527    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:11.046823    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:11.128135    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:11.289456    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:11.611022    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:12.252983    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:13.536423    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:16.098129    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:21.219421    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:37:31.461546    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.906315319s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-194729 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-194729 --alsologtostderr -v=3: (1.342286012s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729: exit status 7 (76.262183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-194729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:37:51.942978    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-194729 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.853867871s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-194729 -n newest-cni-194729
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-194729 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [324380b9-ea13-4bfc-97d9-f38c6b34fd12] Pending
helpers_test.go:352: "busybox" [324380b9-ea13-4bfc-97d9-f38c6b34fd12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [324380b9-ea13-4bfc-97d9-f38c6b34fd12] Running
E1029 09:38:13.388194    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004241186s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.823091597s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-154565 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-154565 --alsologtostderr -v=3: (12.131855792s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565: exit status 7 (112.548153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-154565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1029 09:38:32.904188    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-154565 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.531442304s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-154565 -n default-k8s-diff-port-154565
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zcdsw" [e09fd718-6a85-48c2-a15f-7e68d85c1edf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003687954s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zcdsw" [e09fd718-6a85-48c2-a15f-7e68d85c1edf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004351107s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-154565 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-937200 "pgrep -a kubelet"
I1029 09:39:39.582888    4550 config.go:182] Loaded profile config "auto-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
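KubeletFlags runs `pgrep -a kubelet` over `minikube ssh` and inspects the resulting command line. The sketch below does the same and additionally greps for --container-runtime-endpoint; treating that flag as the interesting one is an assumption, since the report does not show which flags the test asserts on:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"ssh", "-p", "auto-937200", "pgrep -a kubelet").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	cmdline := strings.TrimSpace(string(out))
	fmt.Println(cmdline)
	if strings.Contains(cmdline, "--container-runtime-endpoint") {
		fmt.Println("kubelet is pointed at an explicit CRI socket")
	}
}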

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-937200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z2j6m" [16d1f6e9-443c-4fad-895c-b90d3bb12fa0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z2j6m" [16d1f6e9-443c-4fad-895c-b90d3bb12fa0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.007823964s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)
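NetCatPod force-replaces the netcat deployment from testdata/netcat-deployment.yaml and waits for the app=netcat pod to come up; the DNS, Localhost, and HairPin checks later run inside that pod. A rough equivalent of the deploy-and-wait half, with `kubectl rollout status` standing in for the harness's pod poller:

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the auto-937200 context and echoes its output.
func kc(args ...string) error {
	full := append([]string{"--context", "auto-937200"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Recreate the netcat deployment exactly as the test does.
	if err := kc("replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
		fmt.Println("replace failed:", err)
		return
	}
	// Wait for the new replica set to be fully rolled out.
	if err := kc("rollout", "status", "deployment/netcat", "--timeout=15m"); err != nil {
		fmt.Println("netcat never became available:", err)
	}
}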

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-154565 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-937200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
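The DNS, Localhost, and HairPin checks above all execute inside the netcat deployment: an nslookup of kubernetes.default, an nc probe of localhost:8080, and an nc probe of the pod's own service name. The sketch below bundles the three commands from the log behind one small exec helper (the nslookup is wrapped in /bin/sh -c here, unlike the direct exec in the log):

package main

import (
	"fmt"
	"os/exec"
)

// inNetcat runs a shell command inside the netcat deployment's pod.
func inNetcat(shellCmd string) {
	cmd := exec.Command("kubectl", "--context", "auto-937200",
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %s\n%s", shellCmd, out)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}

func main() {
	inNetcat("nslookup kubernetes.default")    // cluster DNS resolution
	inNetcat("nc -w 5 -i 5 -z localhost 8080") // localhost reachability
	inNetcat("nc -w 5 -i 5 -z netcat 8080")    // hairpin via the service name
}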

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1029 09:39:54.826120    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.566130    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.572698    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.585175    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.606500    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.648705    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.730040    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:04.891917    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:05.216437    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:05.858707    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:07.140257    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:09.701715    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.117285568s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1029 09:40:24.584155    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/functional-546837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:25.065217    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:40:45.549205    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.333365705s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6tpzd" [c53e9032-c07d-47f7-a96d-b7f425128ff7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004123783s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-937200 "pgrep -a kubelet"
I1029 09:41:22.555839    4550 config.go:182] Loaded profile config "kindnet-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-937200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-djggn" [4494579b-7f16-4eeb-85e5-1fbcd23a93e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-djggn" [4494579b-7f16-4eeb-85e5-1fbcd23a93e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00368873s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9jhhm" [55875c7b-ec17-4030-b425-ba905cb58170] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1029 09:41:26.511450    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-9jhhm" [55875c7b-ec17-4030-b425-ba905cb58170] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00410396s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-937200 "pgrep -a kubelet"
I1029 09:41:32.377998    4550 config.go:182] Loaded profile config "calico-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-937200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bqzhz" [c62eb56c-531a-4db8-a7e0-eec310742201] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bqzhz" [c62eb56c-531a-4db8-a7e0-eec310742201] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008428078s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-937200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-937200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (66.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.559806231s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.56s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1029 09:42:10.966217    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:42:38.668044    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/old-k8s-version-162751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:42:48.432757    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/no-preload-505993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.231363733s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-937200 "pgrep -a kubelet"
I1029 09:43:05.700127    4550 config.go:182] Loaded profile config "custom-flannel-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-937200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v88f5" [9112ec52-0cab-4d77-a831-06130b0727ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1029 09:43:09.086357    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.092790    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.104259    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.125735    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.167150    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.250853    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.412358    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:09.734036    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-v88f5" [9112ec52-0cab-4d77-a831-06130b0727ee] Running
E1029 09:43:10.376392    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:11.658502    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:13.387366    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/addons-757691/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:43:14.219937    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003866771s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-937200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
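Taken together, the DNS, Localhost and HairPin steps above exercise basic pod networking on the custom-flannel cluster: in-cluster service name resolution, loopback reachability from inside the pod, and hairpin traffic from the pod back to itself through its own "netcat" service. The same checks can be repeated by hand with the commands the test runs (assuming the netcat deployment and service from testdata/netcat-deployment.yaml are still deployed):

    kubectl --context custom-flannel-937200 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context custom-flannel-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context custom-flannel-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"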

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (65.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.231004985s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-937200 "pgrep -a kubelet"
I1029 09:43:40.383707    4550 config.go:182] Loaded profile config "enable-default-cni-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-937200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d8ctd" [af1a340a-3f12-475c-900a-79576d39bf25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d8ctd" [af1a340a-3f12-475c-900a-79576d39bf25] Running
E1029 09:43:50.066810    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003976715s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-937200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (76.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1029 09:44:31.028690    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/default-k8s-diff-port-154565/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:39.837835    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:39.844180    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:39.855502    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:39.876799    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:39.918056    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:39.999956    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:40.162718    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:40.484328    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:41.126156    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:44:42.407885    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-937200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.680485813s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-bqskw" [d4dbb334-b5d7-4911-b78e-a54e08964f87] Running
E1029 09:44:44.969573    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003366609s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
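The ControllerPod step waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to report Running. A rough manual equivalent, assuming the flannel-937200 profile is still up:

    kubectl --context flannel-937200 get pods -n kube-flannel -l app=flannel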

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-937200 "pgrep -a kubelet"
I1029 09:44:49.774447    4550 config.go:182] Loaded profile config "flannel-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-937200 replace --force -f testdata/netcat-deployment.yaml
E1029 09:44:50.090766    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1029 09:44:50.107245    4550 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hdjpw" [aa67ce5b-2956-4b78-9a2d-4c6c9cb7452f] Pending
helpers_test.go:352: "netcat-cd4db9dbf-hdjpw" [aa67ce5b-2956-4b78-9a2d-4c6c9cb7452f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.041787138s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.40s)
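The NetCatPod step polls for up to 15 minutes until a pod labelled app=netcat reports ready. Outside the test harness, kubectl's built-in wait gives a roughly equivalent check (a sketch with an arbitrary 5-minute timeout):

    kubectl --context flannel-937200 wait --for=condition=Ready pod -l app=netcat --timeout=5m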

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-937200 exec deployment/netcat -- nslookup kubernetes.default
E1029 09:45:00.339563    4550 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/auto-937200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-937200 "pgrep -a kubelet"
I1029 09:45:35.390641    4550 config.go:182] Loaded profile config "bridge-937200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-937200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bwlhf" [1a25496d-0e93-409f-b072-6480ab0c29dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bwlhf" [1a25496d-0e93-409f-b072-6480ab0c29dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004007295s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-937200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-937200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-024522 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-024522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-024522
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-012564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-012564
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-937200 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-937200" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21800-2763/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:29:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-392485
contexts:
- context:
    cluster: kubernetes-upgrade-392485
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:29:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-392485
  name: kubernetes-upgrade-392485
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-392485
  user:
    client-certificate: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/kubernetes-upgrade-392485/client.crt
    client-key: /home/jenkins/minikube-integration/21800-2763/.minikube/profiles/kubernetes-upgrade-392485/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-937200

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937200"

                                                
                                                
----------------------- debugLogs end: kubenet-937200 [took: 3.583937066s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-937200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-937200
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-937200 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-937200" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-937200

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-937200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937200"

                                                
                                                
----------------------- debugLogs end: cilium-937200 [took: 5.121411955s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-937200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-937200
--- SKIP: TestNetworkPlugins/group/cilium (5.33s)